title
stringlengths 4
168
| content
stringlengths 7
1.74M
| commands
sequencelengths 1
5.62k
⌀ | url
stringlengths 79
342
|
---|---|---|---|
Chapter 2. Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE | Chapter 2. Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE In OpenShift Container Platform version 4.13, you can install a cluster on IBM Z or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.13 on the following IBM hardware: IBM z16 (all models), IBM z15 (all models), IBM z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One instance of z/VM 7.2 or later On your z/VM instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.2 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE in IBM Documentation. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z & IBM(R) LinuxONE environments 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z 2.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 2.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 2.15. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Configuring NBDE with static IP in an IBM Z or IBM(R) LinuxONE environment Enabling NBDE disk encryption in an IBM Z or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.13.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append \ ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none \ --dest-karg-append nameserver=<nameserver-ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign \ ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ 2 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 3 zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 1 For installations on DASD-type disks, add coreos.inst.install_dev=/dev/dasda . Omit this value for FCP-type disks. 2 For installations on FCP-type disks, add zfcp.allow_lun_scan=0 . Omit this value for DASD-type disks. 3 For installations on DASD-type disks, replace with rd.dasd=0.0.3490 to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Post-installation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.13.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.13.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.14. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.16. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.17. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 2.17.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.17.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: Then, change the line to 2.17.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.18. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 2.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 2.20. steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.13.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none --dest-karg-append nameserver=<nameserver-ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 2 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 3 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z |
4.18. bind | 4.18. bind 4.18.1. RHSA-2012:0716 - Important: bind security update Updated bind packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Security Fixes CVE-2012-1667 A flaw was found in the way BIND handled zero length resource data records. A malicious owner of a DNS domain could use this flaw to create specially-crafted DNS resource records that would cause a recursive resolver or secondary server to crash or, possibly, disclose portions of its memory. CVE-2012-1033 A flaw was found in the way BIND handled the updating of cached name server (NS) resource records. A malicious owner of a DNS domain could use this flaw to keep the domain resolvable by the BIND server even after the delegation was removed from the parent DNS zone. With this update, BIND limits the time-to-live of the replacement record to that of the time-to-live of the record being replaced. Users of bind are advised to upgrade to these updated packages, which correct these issues. After installing the update, the BIND daemon (named) will be restarted automatically. 4.18.2. RHBA-2011:1697 - bind bug fix update Updated bind packages that fix several bugs are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating properly. Bug Fixes BZ# 699951 Prior to this update, the code in libdns which sends DNS requests was not robust enough and suffered from a race condition. If a race condition occurred, the "named" name service daemon logged an error message in the format "zone xxx.xxx.xxx.in-addr.arpa/IN: refresh: failure trying master xxx.xxx.xxx.xxx#53 (source xxx.xxx.xxx.xxx#0): operation canceled" even when zone refresh was successful. This update improves the code to prevent a race condition in libdns and the error no longer occurs in the scenario described. BZ# 700097 A command or script traditionally gives a non-zero exit status to indicate an error. Prior to this update, the nsupdate utility incorrectly returned the exit status "0" (zero) when the target DNS zone did not exist. Consequently, the nsupdate command returned "success" even though the update failed. This update corrects this error and nsupdate now returns the exit status "2" in the scenario described. BZ# 725577 Prior to this update, named did not unload the bind-dyndb-ldap plugin in the correct places in the code. Consequently, named sometimes terminated unexpectedly during reload or stop when the bind-dyndb-ldap plugin was used. This update corrects the code, the plug-in is now unloaded in the correct places, and named no longer crashes in the scenario described. BZ# 693982 A non-writable working directory is a long time feature on all Red Hat systems. Previously, named wrote "the working directory is not writable" as an error to the system log. This update changes the code so that named now writes this information only into the debug log. BZ# 717468 The named initscript lacked the "configtest" option that was available in earlier releases. Consequently, users of the bind initscript could not use the "service named configtest" command. This update adds the option and users can now test their DNS configurations for correct syntax using the "service named configtest" command. All users of bind are advised to upgrade to these updated packages, which fix these bugs. 4.18.3. RHBA-2011:1836 - bind bug fix update Updated bind packages that fix two bugs are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with the DNS server); and tools for verifying that the DNS server is operating properly. Bug Fixes BZ# 758669 Prior to this update, errors arising on automatic updates of DNSSEC trust anchors were handled incorrectly. Consequently, the named daemon could become unresponsive on shutdown. With this update, the error handling has been improved and named exits on shutdown gracefully. BZ# 758670 Prior to this update, a race condition could occur on validation of DNSSEC-signed NXDOMAIN responses and the named daemon could terminate unexpectedly. With this update, the underlying code has been fixed and the race condition no longer occurs. All users of bind are advised to upgrade to these updated packages, which fix these bugs. 4.18.4. RHBA-2012:0009 - bind bug fix update Updated bind packages that fix one bug are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with the DNS server); and tools for verifying that the DNS server is operating properly. Bug Fix BZ# 769366 The multi-threaded named daemon uses the atomic operations feature to speed-up an access to shared data. This feature did not work correctly on the 32-bit and 64-bit PowerPC architectures. Therefore, the named daemon sometimes became unresponsive on these architectures. This update disables the atomic operations feature on the 32-bit and 64-bit PowerPC architectures, which ensures that the named daemon is now more stable, reliable and no longer hangs. All users of bind are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/bind |
Chapter 1. Overview | Chapter 1. Overview NET images are added to OpenShift by importing imagestream definitions from s2i-dotnetcore . The imagestream definitions include the dotnet imagestream which contains sdk images for different supported versions of .NET. Life Cycle and Support Policies for the .NET Program provides an up-to-date overview of supported versions. Version Tag Alias .NET 6.0 dotnet:6.0-ubi8 dotnet:6.0 .NET 7.0 dotnet:7.0-ubi8 dotnet:7.0 .NET 8.0 dotnet:8.0-ubi8 dotnet:8.0 The sdk images have corresponding runtime images which are defined under the dotnet-runtime imagestream. The container images work across different versions of Red Hat Enterprise Linux and OpenShift. The UBI-8 based images (suffix -ubi8) are hosted on the registry.access.redhat.com and do not require authentication. | null | https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_openshift_container_platform/con_overview-of-dotnet-on-openshift_getting-started-with-dotnet-on-openshift |
23.15.5. Recommended Partitioning Scheme | 23.15.5. Recommended Partitioning Scheme Configuring efficient swap space for Linux on System z is a complex task. It very much depends on the specific environment and should be tuned to the actual system load. Refer to the following resources for more information and to guide your decision: 'Chapter 7. Linux Swapping' in the IBM Redbooks publication Linux on IBM System z: Performance Measurement and Tuning [ IBM Form Number SG24-6926-01 ], [ ISBN 0738485586 ], available from http://www.redbooks.ibm.com/abstracts/sg246926.html Linux Performance when running under VM , available from http://www.vm.ibm.com/perf/tips/linuxper.html | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-diskpartrecommend-s390 |
Chapter 2. Getting Started: Overview | Chapter 2. Getting Started: Overview This chapter provides a summary procedure for setting up a basic Red Hat High Availability cluster consisting of two nodes running Red Hat Enterprise Linux release 6. This procedure uses the luci user interface to create the cluster. While this procedure creates a basic cluster, it does not yield a complete supported cluster configuration. Further details on planning and deploying a cluster are provided in the remainder of this document. 2.1. Installation and System Setup Before creating a Red Hat High Availability cluster, perform the following setup and installation steps. Ensure that your Red Hat account includes the following support entitlements: RHEL: Server Red Hat Applications: High availability Red Hat Applications: Resilient Storage, if using the Clustered Logical Volume Manager (CLVM) and GFS2 file systems. Register the cluster systems for software updates, using either Red Hat Subscriptions Manager (RHSM) or RHN Classic. On each node in the cluster, configure the iptables firewall. The iptables firewall can be disabled, or it can be configured to allow cluster traffic to pass through. To disable the iptables system firewall, execute the following commands. For information on configuring the iptables firewall to allow cluster traffic to pass through, see Section 3.3, "Enabling IP Ports" . On each node in the cluster, configure SELinux. SELinux is supported on Red Hat Enterprise Linux 6 cluster nodes in Enforcing or Permissive mode with a targeted policy, or it can be disabled. To check the current SELinux state, run the getenforce : For information on enabling and disabling SELinux, see the Security-Enhanced Linux user guide. Install the cluster packages and package groups. On each node in the cluster, install the High Availability and Resiliant Storage package groups. On the node that will be hosting the web management interface, install the luci package. | [
"service iptables stop chkconfig iptables off",
"getenforce Permissive",
"yum groupinstall 'High Availability' 'Resilient Storage'",
"yum install luci"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-startup-CA |
Part II. Configuration and Administration | Part II. Configuration and Administration The second part of Red Hat Enterprise Linux 7 Desktop Migration and Administration Guide describes and explains various ways the GNOME Desktop can be configured and administered. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/part-configuration_and_administration |
19.3. Unmounting a File System | 19.3. Unmounting a File System To detach a previously mounted file system, use either of the following variants of the umount command: Note that unless this is performed while logged in as root , the correct permissions must be available to unmount the file system. For more information, see Section 19.2.2, "Specifying the Mount Options" . See Example 19.9, "Unmounting a CD" for an example usage. Important When a file system is in use (for example, when a process is reading a file on this file system, or when it is used by the kernel), running the umount command fails with an error. To determine which processes are accessing the file system, use the fuser command in the following form: For example, to list the processes that are accessing a file system mounted to the /media/cdrom/ directory: Example 19.9. Unmounting a CD To unmount a CD that was previously mounted to the /media/cdrom/ directory, use the following command: | [
"umount directory USD umount device",
"fuser -m directory",
"fuser -m /media/cdrom /media/cdrom: 1793 2013 2022 2435 10532c 10672c",
"umount /media/cdrom"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/sect-Using_the_mount_Command-Unmounting |
4.266. redhat-release | 4.266. redhat-release 4.266.1. RHEA-2011:1743 - redhat-release enhancement update for Red Hat Enterprise Linux 6.2 An enhanced redhat-release package is now available for Red Hat Enterprise Linux 6.2. The redhat-release package contains licensing information regarding, and identifies the installed version of, Red Hat Enterprise Linux. This updated redhat-release package reflects changes made for the release of Red Hat Enterprise Linux 6.2. Users of Red Hat Enterprise Linux 6 are advised to upgrade to this updated redhat-release package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/redhat-release |
Chapter 3. Administering the Environment | Chapter 3. Administering the Environment 3.1. Administering the Self-Hosted Engine 3.1.1. Maintaining the Self-hosted engine 3.1.1.1. Self-hosted engine maintenance modes explained The maintenance modes enable you to start, stop, and modify the Manager virtual machine without interference from the high-availability agents, and to restart and modify the self-hosted engine nodes in the environment without interfering with the Manager. There are three maintenance modes: global - All high-availability agents in the cluster are disabled from monitoring the state of the Manager virtual machine. The global maintenance mode must be applied for any setup or upgrade operations that require the ovirt-engine service to be stopped, such as upgrading to a later version of Red Hat Virtualization. local - The high-availability agent on the node issuing the command is disabled from monitoring the state of the Manager virtual machine. The node is exempt from hosting the Manager virtual machine while in local maintenance mode; if hosting the Manager virtual machine when placed into this mode, the Manager will migrate to another node, provided there is one available. The local maintenance mode is recommended when applying system changes or updates to a self-hosted engine node. none - Disables maintenance mode, ensuring that the high-availability agents are operating. 3.1.1.2. Setting local maintenance mode Enabling local maintenance mode stops the high-availability agent on a single self-hosted engine node. Setting the local maintenance mode from the Administration Portal Put a self-hosted engine node into local maintenance mode: In the Administration Portal, click Compute Hosts and select a self-hosted engine node. Click Management Maintenance and OK . Local maintenance mode is automatically triggered for that node. After you have completed any maintenance tasks, disable the maintenance mode: In the Administration Portal, click Compute Hosts and select the self-hosted engine node. Click Management Activate . Setting the local maintenance mode from the command line Log in to a self-hosted engine node and put it into local maintenance mode: After you have completed any maintenance tasks, disable the maintenance mode: 3.1.1.3. Setting global maintenance mode Enabling global maintenance mode stops the high-availability agents on all self-hosted engine nodes in the cluster. Setting the global maintenance mode from the Administration Portal Put all of the self-hosted engine nodes into global maintenance mode: In the Administration Portal, click Compute Hosts and select any self-hosted engine node. Click More Actions ( ), then click Enable Global HA Maintenance . After you have completed any maintenance tasks, disable the maintenance mode: In the Administration Portal, click Compute Hosts and select any self-hosted engine node. Click More Actions ( ), then click Disable Global HA Maintenance . Setting the global maintenance mode from the command line Log in to any self-hosted engine node and put it into global maintenance mode: After you have completed any maintenance tasks, disable the maintenance mode: 3.1.2. Administering the Manager Virtual Machine The hosted-engine utility provides many commands to help administer the Manager virtual machine. You can run hosted-engine on any self-hosted engine node. To see all available commands, run hosted-engine --help . For additional information on a specific command, run hosted-engine -- command --help . 3.1.2.1. Updating the Self-Hosted Engine Configuration To update the self-hosted engine configuration, use the hosted-engine --set-shared-config command. This command updates the self-hosted engine configuration on the shared storage domain after the initial deployment. To see the current configuration values, use the hosted-engine --get-shared-config command. To see a list of all available configuration keys and their corresponding types, enter the following command: # hosted-engine --set-shared-config key --type= type --help Where type is one of the following: he_local Sets values in the local instance of /etc/ovirt-hosted-engine/hosted-engine.conf on the local host, so only that host uses the new values. To enable the new value, restart the ovirt-ha-agent and ovirt-ha-broker services. he_shared Sets values in /etc/ovirt-hosted-engine/hosted-engine.conf on shared storage, so all hosts that are deployed after a configuration change use these values. To enable the new value on a host, redeploy that host. ha Sets values in /var/lib/ovirt-hosted-engine-ha/ha.conf on local storage. New settings take effect immediately. broker Sets values in /var/lib/ovirt-hosted-engine-ha/broker.conf on local storage. Restart the ovirt-ha-broker service to enable new settings. 3.1.2.2. Configuring Email Notifications You can configure email notifications using SMTP for any HA state transitions on the self-hosted engine nodes. The keys that can be updated include: smtp-server , smtp-port , source-email , destination-emails , and state_transition . To configure email notifications: On a self-hosted engine node, set the smtp-server key to the desired SMTP server address: # hosted-engine --set-shared-config smtp-server smtp.example.com --type=broker Note To verify that the self-hosted engine configuration file has been updated, run: # hosted-engine --get-shared-config smtp-server --type=broker broker : smtp.example.com, type : broker Check that the default SMTP port (port 25) has been configured: Specify an email address you want the SMTP server to use to send out email notifications. Only one address can be specified. # hosted-engine --set-shared-config source-email [email protected] --type=broker Specify the destination email address to receive email notifications. To specify multiple email addresses, separate each address by a comma. # hosted-engine --set-shared-config destination-emails [email protected] , [email protected] --type=broker To verify that SMTP has been properly configured for your self-hosted engine environment, change the HA state on a self-hosted engine node and check if email notifications were sent. For example, you can change the HA state by placing HA agents into maintenance mode. See Maintaining the Self-Hosted Engine for more information. 3.1.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts If the Manager virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate to it. This memory can be reserved on multiple self-hosted engine nodes by using a scheduling policy. The scheduling policy checks if enough memory to start the Manager virtual machine will remain on the specified number of additional self-hosted engine nodes before starting or migrating any virtual machines. See Creating a Scheduling Policy in the Administration Guide for more information about scheduling policies. To add more self-hosted engine nodes to the Red Hat Virtualization Manager, see Adding self-hosted engine nodes to the Manager . Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts Click Compute Clusters and select the cluster containing the self-hosted engine nodes. Click Edit . Click the Scheduling Policy tab. Click + and select HeSparesCount . Enter the number of additional self-hosted engine nodes that will reserve enough free memory to start the Manager virtual machine. Click OK . 3.1.4. Adding Self-Hosted Engine Nodes to the Red Hat Virtualization Manager Add self-hosted engine nodes in the same way as a standard host, with an additional step to deploy the host as a self-hosted engine node. The shared storage domain is automatically detected and the node can be used as a failover host to host the Manager virtual machine when required. You can also attach standard hosts to a self-hosted engine environment, but they cannot host the Manager virtual machine. Have at least two self-hosted engine nodes to ensure the Manager virtual machine is highly available. You can also add additional hosts using the REST API. See Hosts in the REST API Guide . Prerequisites All self-hosted engine nodes must be in the same cluster. If you are reusing a self-hosted engine node, remove its existing self-hosted engine configuration. See Removing a Host from a Self-Hosted Engine Environment . Procedure In the Administration Portal, click Compute Hosts . Click New . For information on additional host settings, see Explanation of Settings and Controls in the New Host and Edit Host Windows in the Administration Guide . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click the Hosted Engine tab. Select Deploy . Click OK . 3.1.5. Reinstalling an Existing Host as a Self-Hosted Engine Node You can convert an existing, standard host in a self-hosted engine environment to a self-hosted engine node capable of hosting the Manager virtual machine. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Procedure Click Compute Hosts and select the host. Click Management Maintenance and OK . Click Installation Reinstall . Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK . The host is reinstalled with self-hosted engine configuration, and is flagged with a crown icon in the Administration Portal. 3.1.6. Booting the Manager Virtual Machine in Rescue Mode This topic describes how to boot the Manager virtual machine into rescue mode when it does not start. For more information, see Booting to Rescue Mode in the Red Hat Enterprise Linux System Administrator's Guide . Connect to one of the hosted-engine nodes: USD ssh root@ host_address Put the self-hosted engine in global maintenance mode: # hosted-engine --set-maintenance --mode=global Check if there is already a running instance of the Manager virtual machine: # hosted-engine --vm-status If a Manager virtual machine instance is running, connect to its host: # ssh root@ host_address Shut down the virtual machine: # hosted-engine --vm-shutdown Note If the virtual machine does not shut down, execute the following command: Start the Manager virtual machine in pause mode: hosted-engine --vm-start-paused Set a temporary VNC password: hosted-engine --add-console-password The command outputs the necessary information you need to log in to the Manger virtual machine with VNC. Log in to the Manager virtual machine with VNC. The Manager virtual machine is still paused, so it appears to be frozen. Resume the Manager virtual machine with the following command on its host: Warning After running the following command, the boot loader menu appears. You need to enter into rescue mode before the boot loader proceeds with the normal boot process. Read the step about entering into rescue mode before proceeding with this command. # /usr/bin/virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine Boot the Manager virtual machine in rescue mode. Disable global maintenance mode # hosted-engine --set-maintenance --mode=none You can now run rescue tasks on the Manager virtual machine. 3.1.7. Removing a Host from a Self-Hosted Engine Environment To remove a self-hosted engine node from your environment, place the node into maintenance mode, undeploy the node, and optionally remove it. The node can be managed as a regular host after the HA services have been stopped, and the self-hosted engine configuration files have been removed. Procedure In the Administration Portal, click Compute Hosts and select the self-hosted engine node. Click Management Maintenance and OK . Click Installation Reinstall . Click the Hosted Engine tab and select UNDEPLOY from the drop-down list. This action stops the ovirt-ha-agent and ovirt-ha-broker services and removes the self-hosted engine configuration file. Click OK . Optionally, click Remove . This opens the Remove Host(s) confirmation window. Click OK . 3.1.8. Updating a Self-Hosted Engine To update a self-hosted engine from your current version to the latest version, you must place the environment in global maintenance mode and then follow the standard procedure for updating between minor versions. Enabling global maintenance mode You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine. Procedure Log in to one of the self-hosted engine nodes and enable global maintenance mode: # hosted-engine --set-maintenance --mode=global Confirm that the environment is in global maintenance mode before proceeding: # hosted-engine --vm-status You should see a message indicating that the cluster is in global maintenance mode. Updating the Red Hat Virtualization Manager Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated: Disable global maintenance mode Reboot the machine to complete the update. Related Information Disabling global maintenance mode Disabling global maintenance mode Procedure Log in to the Manager virtual machine and shut it down. Log in to one of the self-hosted engine nodes and disable global maintenance mode: # hosted-engine --set-maintenance --mode=none When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start. Confirm that the environment is running: # hosted-engine --vm-status The listed information includes Engine Status . The value for Engine status should be: Note When the virtual machine is still booting and the Manager hasn't started yet, the Engine status is: If this happens, wait a few minutes and try again. 3.1.9. Changing the FQDN of the Manager in a Self-Hosted Engine You can use the ovirt-engine-rename command to update records of the fully qualified domain name (FQDN) of the Manager. For details see Renaming the Manager with the Ovirt Engine Rename Tool . | [
"hosted-engine --set-maintenance --mode=local",
"hosted-engine --set-maintenance --mode=none",
"hosted-engine --set-maintenance --mode=global",
"hosted-engine --set-maintenance --mode=none",
"hosted-engine --set-shared-config key --type= type --help",
"hosted-engine --set-shared-config smtp-server smtp.example.com --type=broker",
"hosted-engine --get-shared-config smtp-server --type=broker broker : smtp.example.com, type : broker",
"hosted-engine --get-shared-config smtp-port --type=broker broker : 25, type : broker",
"hosted-engine --set-shared-config source-email [email protected] --type=broker",
"hosted-engine --set-shared-config destination-emails [email protected] , [email protected] --type=broker",
"ssh root@ host_address",
"hosted-engine --set-maintenance --mode=global",
"hosted-engine --vm-status",
"ssh root@ host_address",
"hosted-engine --vm-shutdown",
"hosted-engine --vm-poweroff",
"hosted-engine --vm-start-paused",
"hosted-engine --add-console-password",
"/usr/bin/virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine",
"hosted-engine --set-maintenance --mode=none",
"hosted-engine --set-maintenance --mode=global",
"hosted-engine --vm-status",
"engine-upgrade-check",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"yum update --nobest",
"hosted-engine --set-maintenance --mode=none",
"hosted-engine --vm-status",
"{\"health\": \"good\", \"vm\": \"up\", \"detail\": \"Up\"}",
"{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Powering up\"}"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/part-Administering_the_Environment |
probe::signal.flush | probe::signal.flush Name probe::signal.flush - Flushing all pending signals for a task Synopsis signal.flush Values task The task handler of the process performing the flush pid_name The name of the process associated with the task performing the flush name Name of the probe point sig_pid The PID of the process associated with the task performing the flush | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-flush |
Chapter 1. Overview of the Insights for RHEL vulnerability service | Chapter 1. Overview of the Insights for RHEL vulnerability service The vulnerability service enables quick assessment and comprehensive monitoring of the exposure of your RHEL infrastructure to Common Vulnerabilities and Exposures (CVEs) so you can better understand your most critical issues and systems and effectively manage remediations. With your data uploaded to the vulnerability service, you can filter and sort groups of systems and CVEs to refine and optimize your views. You can also add context to individual CVEs when they pose an extraordinary risk to systems. After gaining an understanding of your risk exposure, report on the status of the CVEs to appropriate stakeholders, then create Ansible Playbooks to remediate issues to secure your organization. Prerequisites The vulnerability service is available for all supported versions of RHEL 6, 7, 8 and 9. The following conditions must be met before you can use the vulnerability service: Each system has the Insights client installed and registered to the Insights for Red Hat Enterprise Linux application. Follow the {DOC-GET-STARTED} to install the client and register your system(s). The vulnerability service is fully supported for RHEL systems managed by Red Hat Subscription Management (RHSM) and Satellite 6 and later. Using any other means to obtain package updates, other than Satellite 6 with RHSM or RHSM registered with subscription.redhat.com (Customer Portal), can lead to misleading results. Vulnerability service remediations are not fully supported and may not work properly on Satellite 5 and Spacewalk-hosted RHEL systems. Some features require special privileges provided by your organization administrator. Specifically, the ability to view Red Hat Security Advisories (RHSAs) associated with certain CVEs and systems, and to view and patch those vulnerabilities in the Red Hat Insights for Red Hat Enterprise Linux patch service, requires permissions granted through user access. Additional resources Generating Vulnerability Service Reports with FedRAMP 1.1. How the vulnerability service works The vulnerability service uses the Insights client to gather information about your RHEL systems. The client gathers information about the systems and uploads it to the vulnerability service. The vulnerability service then assesses the data against the Red Hat CVE database and security bulletins to determine if there are any outstanding CVEs that could affect the systems, and provides the results of those comparisons. Once the data has been analyzed, you can view and sort the displayed results, assess the risks and priorities of the vulnerabilities, report their status, and create and deploy Ansible Playbooks to remediate them. The goal of the vulnerability service is to enable a repeatable process that protects against security weaknesses in your RHEL infrastructure. 1.2. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. All users on your account have access to most of the data in Insights for Red Hat Enterprise Linux. 1.2.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.2.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.2.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. 1.2.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (⚙) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. Additional resources For more information about user access and permissions, see User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP . 1.2.3. User Access roles for vulnerability-service users The following roles enable standard or enhanced access to vulnerability service features in Insights for Red Hat Enterprise Linux: Vulnerability viewer. Read any vulnerability-service resource. Vulnerability administrator. Perform any available operation against any vulnerability-service resource. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_vulnerabilities_on_rhel_systems_with_fedramp/vuln-overview_vulnerability-assess |
function::gid | function::gid Name function::gid - Returns the group ID of a target process. Synopsis Arguments None General Syntax gid: long Description This function returns the group ID of a target process. | [
"function gid:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-gid |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_alt-java_with_red_hat_build_of_openjdk/providing-direct-documentation-feedback_openjdk |
C.3. Opening a Perspective | C.3. Opening a Perspective There are two ways to open a perspective: Using the Open Perspective button on the shortcut bar. Selecting a perspective from the Window > Perspective > Open Perspective menu. To open a perspective by using the shortcut bar button: Click the Open Perspective button . In the Select Perspective dialog, select Teiid Designer and click OK . Figure C.2. Select Perspective Dialog The Teiid Designer perspective is now displayed. There are few additional features of perspectives to take note of. The shortcut bar may contain multiple perspectives. The perspective button which is pressed in, indicates that it is the current perspective. To display the full name of the perspectives, right-click the perspective bar and click Show Text and conversely click Hide Text to only show icons. To quickly switch between open perspectives, select the desired perspective button. Notice that the set of views is different for each of the perspectives. Figure C.3. Workbench Window Title Bar | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/opening_a_perspective |
Appendix A. Encryption Standards | Appendix A. Encryption Standards A.1. Synchronous Encryption A.1.1. Advanced Encryption Standard - AES In cryptography, the Advanced Encryption Standard (AES) is an encryption standard adopted by the U.S. Government. The standard comprises three block ciphers, AES-128, AES-192 and AES-256, adopted from a larger collection originally published as Rijndael. Each AES cipher has a 128-bit block size, with key sizes of 128, 192 and 256 bits, respectively. The AES ciphers have been analyzed extensively and are now used worldwide, as was the case with its predecessor, the Data Encryption Standard (DES). [2] A.1.1.1. AES History AES was announced by National Institute of Standards and Technology (NIST) as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001 after a 5-year standardization process. Fifteen competing designs were presented and evaluated before Rijndael was selected as the most suitable. It became effective as a standard May 26, 2002. It is available in many different encryption packages. AES is the first publicly accessible and open cipher approved by the NSA for top secret information (see the Security section in the Wikipedia article on AES). [3] The Rijndael cipher was developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, and submitted by them to the AES selection process. Rijndael is a portmanteau of the names of the two inventors. [4] A.1.2. Data Encryption Standard - DES The Data Encryption Standard (DES) is a block cipher (a form of shared secret encryption) that was selected by the National Bureau of Standards as an official Federal Information Processing Standard (FIPS) for the United States in 1976 and which has subsequently enjoyed widespread use internationally. It is based on a symmetric-key algorithm that uses a 56-bit key. The algorithm was initially controversial with classified design elements, a relatively short key length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently came under intense academic scrutiny which motivated the modern understanding of block ciphers and their cryptanalysis. [5] A.1.2.1. DES History DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit key size being too small; in January, 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are unfeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the Advanced Encryption Standard (AES). [6] In some documentation, a distinction is made between DES as a standard and DES the algorithm which is referred to as the DEA (the Data Encryption Algorithm). [7] [2] "Advanced Encryption Standard." Wikipedia . 14 November 2009 http://en.wikipedia.org/wiki/Advanced_Encryption_Standard [3] "Advanced Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Advanced_Encryption_Standard [4] "Advanced Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Advanced_Encryption_Standard [5] "Data Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Data_Encryption_Standard [6] "Data Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Data_Encryption_Standard [7] "Data Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Data_Encryption_Standard | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/chap-encryption_standards |
Chapter 8. Managing Snapshots | Chapter 8. Managing Snapshots Red Hat Gluster Storage Snapshot feature enables you to create point-in-time copies of Red Hat Gluster Storage volumes, which you can use to protect data. Users can directly access Snapshot copies which are read-only to recover from accidental deletion, corruption, or modification of the data. Figure 8.1. Snapshot Architecture In the Snapshot Architecture diagram, Red Hat Gluster Storage volume consists of multiple bricks (Brick1 Brick2 etc) which is spread across one or more nodes and each brick is made up of independent thin Logical Volumes (LV). When a snapshot of a volume is taken, it takes the snapshot of the LV and creates another brick. Brick1_s1 is an identical image of Brick1. Similarly, identical images of each brick is created and these newly created bricks combine together to form a snapshot volume. Some features of snapshot are: Crash Consistency A crash consistent snapshot is captured at a particular point-in-time. When a crash consistent snapshot is restored, the data is identical as it was at the time of taking a snapshot. Note Currently, application level consistency is not supported. Online Snapshot Snapshot is an online snapshot hence the file system and its associated data continue to be available for the clients even while the snapshot is being taken. Barrier To guarantee crash consistency some of the file operations are blocked during a snapshot operation. These file operations are blocked till the snapshot is complete. All other file operations are passed through. There is a default time-out of 2 minutes, within that time if snapshot is not complete then these file operations are unbarriered. If the barrier is unbarriered before the snapshot is complete then the snapshot operation fails. This is to ensure that the snapshot is in a consistent state. Note Taking a snapshot of a Red Hat Gluster Storage volume that is hosting the Virtual Machine Images is not recommended. Taking a Hypervisor assisted snapshot of a virtual machine would be more suitable in this use case. 8.1. Prerequisites Before using this feature, ensure that the following prerequisites are met: Snapshot is based on thinly provisioned LVM. Ensure the volume is based on LVM2. Red Hat Gluster Storage is supported on Red Hat Enterprise Linux 6.7 and later, Red Hat Enterprise Linux 7.1 and later, and on Red Hat Enterprise Linux 8.2 and later versions. All these versions of Red Hat Enterprise Linux is based on LVM2 by default. For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinprovisioned_volumes.html Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Each brick must be independent thinly provisioned logical volume(LV). All bricks must be online for snapshot creation. The logical volume which contains the brick must not contain any data other than the brick. Linear LVM and thin LV are supported with Red Hat Gluster Storage 3.4 and later. For more information, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/logical_volume_manager_administration/index#LVM_components Recommended Setup The recommended setup for using Snapshot is described below. In addition, you must ensure to read Chapter 19, Tuning for Performance for enhancing snapshot performance: For each volume brick, create a dedicated thin pool that contains the brick of the volume and its (thin) brick snapshots. With the current thin-p design, avoid placing the bricks of different Red Hat Gluster Storage volumes in the same thin pool, as this reduces the performance of snapshot operations, such as snapshot delete, on other unrelated volumes. The recommended thin pool chunk size is 256KB. There might be exceptions to this in cases where we have a detailed information of the customer's workload. The recommended pool metadata size is 0.1% of the thin pool size for a chunk size of 256KB or larger. In special cases, where we recommend a chunk size less than 256KB, use a pool metadata size of 0.5% of thin pool size. For Example To create a brick from device /dev/sda1. Create a physical volume(PV) by using the pvcreate command. Use the correct dataalignment option based on your device. For more information, Section 19.2, "Brick Configuration" Create a Volume Group (VG) from the PV using the following command: Create a thin-pool using the following command: A thin pool of size 1 TB is created, using a chunksize of 256 KB. Maximum pool metadata size of 16 G is used. Create a thinly provisioned volume from the previously created pool using the following command: Create a file system (XFS) on this. Use the recommended options to create the XFS file system on the thin LV. For example, Mount this logical volume and use the mount path as the brick. | [
"pvcreate /dev/sda1",
"vgcreate dummyvg /dev/sda1",
"lvcreate --size 1T --thin dummyvg/dummypool --chunksize 256k --poolmetadatasize 16G --zero n",
"lvcreate --virtualsize 1G --thin dummyvg/dummypool --name dummylv",
"mkfs.xfs -f -i size=512 -n size=8192 /dev/dummyvg/dummylv",
"mount /dev/dummyvg/dummylv /mnt/brick1"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Managing_Snapshots |
Chapter 8. Config [samples.operator.openshift.io/v1] | Chapter 8. Config [samples.operator.openshift.io/v1] Description Config contains the configuration and detailed condition status for the Samples Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConfigSpec contains the desired configuration and state for the Samples Operator, controlling various behavior around the imagestreams and templates it creates/updates in the openshift namespace. status object ConfigStatus contains the actual configuration in effect, as well as various details that describe the state of the Samples Operator. 8.1.1. .spec Description ConfigSpec contains the desired configuration and state for the Samples Operator, controlling various behavior around the imagestreams and templates it creates/updates in the openshift namespace. Type object Property Type Description architectures array (string) architectures determine which hardware architecture(s) to install, where x86_64, ppc64le, and s390x are the only supported choices currently. managementState string managementState is top level on/off type of switch for all operators. When "Managed", this operator processes config and manipulates the samples accordingly. When "Unmanaged", this operator ignores any updates to the resources it watches. When "Removed", it reacts that same wasy as it does if the Config object is deleted, meaning any ImageStreams or Templates it manages (i.e. it honors the skipped lists) and the registry secret are deleted, along with the ConfigMap in the operator's namespace that represents the last config used to manipulate the samples, samplesRegistry string samplesRegistry allows for the specification of which registry is accessed by the ImageStreams for their image content. Defaults on the content in https://github.com/openshift/library that are pulled into this github repository, but based on our pulling only ocp content it typically defaults to registry.redhat.io. skippedImagestreams array (string) skippedImagestreams specifies names of image streams that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. skippedTemplates array (string) skippedTemplates specifies names of templates that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. 8.1.2. .status Description ConfigStatus contains the actual configuration in effect, as well as various details that describe the state of the Samples Operator. Type object Property Type Description architectures array (string) architectures determine which hardware architecture(s) to install, where x86_64 and ppc64le are the supported choices. conditions array conditions represents the available maintenance status of the sample imagestreams and templates. conditions[] object ConfigCondition captures various conditions of the Config as entries are processed. managementState string managementState reflects the current operational status of the on/off switch for the operator. This operator compares the ManagementState as part of determining that we are turning the operator back on (i.e. "Managed") when it was previously "Unmanaged". samplesRegistry string samplesRegistry allows for the specification of which registry is accessed by the ImageStreams for their image content. Defaults on the content in https://github.com/openshift/library that are pulled into this github repository, but based on our pulling only ocp content it typically defaults to registry.redhat.io. skippedImagestreams array (string) skippedImagestreams specifies names of image streams that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. skippedTemplates array (string) skippedTemplates specifies names of templates that should NOT be created/updated. Admins can use this to allow them to delete content they don't want. They will still have to manually delete the content but the operator will not recreate(or update) anything listed here. version string version is the value of the operator's payload based version indicator when it was last successfully processed 8.1.3. .status.conditions Description conditions represents the available maintenance status of the sample imagestreams and templates. Type array 8.1.4. .status.conditions[] Description ConfigCondition captures various conditions of the Config as entries are processed. Type object Required status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. lastUpdateTime string lastUpdateTime is the last time this condition was updated. message string message is a human readable message indicating details about the transition. reason string reason is what caused the condition's last transition. status string status of the condition, one of True, False, Unknown. type string type of condition. 8.2. API endpoints The following API endpoints are available: /apis/samples.operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/samples.operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/samples.operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 8.2.1. /apis/samples.operator.openshift.io/v1/configs HTTP method DELETE Description delete collection of Config Table 8.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 8.2. HTTP responses HTTP code Reponse body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body Config schema Table 8.5. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 8.2.2. /apis/samples.operator.openshift.io/v1/configs/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the Config HTTP method DELETE Description delete a Config Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Config schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 8.2.3. /apis/samples.operator.openshift.io/v1/configs/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the Config HTTP method GET Description read status of the specified Config Table 8.16. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body Config schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/config-samples-operator-openshift-io-v1 |
10.3. Scheduling Policy Elements | 10.3. Scheduling Policy Elements The scheduling_policy element contains the following elements: Table 10.3. Scheduling policy elements Element Type Description Properties policy enumerated The VM scheduling mode for hosts in the cluster. A list of enumerated types are listed in capabilities . thresholds low= high= duration= complex Defines CPU limits for the host. The high attribute controls the highest CPU usage percentage the host can have before being considered overloaded. The low attribute controls the lowest CPU usage percentage the host can have before being considered underutilized. The duration attribute refers to the number of seconds the host needs to be overloaded before the scheduler starts and moves the load to another host. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/scheduling_policy_elements |
function::randint | function::randint Name function::randint - Return a random number between [0,n) Synopsis Arguments n Number past upper limit of range, not larger than 2**20. | [
"randint:long(n:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-randint |
Configuring authentication for Red Hat Satellite users | Configuring authentication for Red Hat Satellite users Red Hat Satellite 6.16 Configure authentication for Satellite users and enable authentication features such as SSO, OTP, or 2FA Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_authentication_for_red_hat_satellite_users/index |
25.3. Examples of Using Automember Groups | 25.3. Examples of Using Automember Groups Note These examples are shown using the CLI; the same configuration can be performed in the web UI. A Note on Creating Default Groups One common environment requirement is to have some sort of default group that users or hosts are added to. There are a couple of different ways to approach that. All entries can be added to a single, global group regardless of what other groups they are also added to. Entries can be added to specific automember groups. If the new entry does not match any autogroup, then it is added to a default or fallback group. These strategies are mutually exclusive. If an entry matches a global group, then it does match an automember group and would, therefore, not be added to the fallback group. 25.3.1. Setting an All Users/Hosts Rule To add all users or all hosts to a single group, use an inclusive regular expression for some attribute (such as cn or fqdn ) which all entries will contain. A regular expression to match all entries is simply .* . For example, to add all hosts to the same host group: Every host added after that is automatically added to the allhosts group: For more information on PCRE patterns, see the pcresyntax(3) man page . 25.3.2. Defining Default Automembership Groups There is a special command to set a default group, automember-default-group-set . This sets the group name ( --default-group ) and group type( --type ), similar to an automember rule, but there is no condition to match. By definition, default group members are unmatched entries. For example: A default group rule can be removed using the automember-default-group-remove command. Since there is only one default group for a group type, it is only necessary to give the group type, not the group name: 25.3.3. Using Automembership Groups with Windows Users When a user is created in IdM, that user is automatically added as a member to the ipausers group (which is the default group for all new users, apart from any automember group). However, when a Windows user is synced over from Active Directory, that user is not automatically added to the ipausers group. New Windows users can be added to the ipausers group, as with users created in Identity Management, by using an automember group. Every Windows user is added with the ntUser object class; that object class can be used as an inclusive filter to identify new Windows users to add to the automember group. First, define the ipausers group as an automember group: Then, use the ntUser object class as a condition to add users: | [
"[jsmith@server ~]USD ipa automember-add-condition --type=hostgroup allhosts --inclusive-regex=.* --key=fqdn -------------------------------- Added condition(s) to \"allhosts\" -------------------------------- Automember Rule: allhosts Inclusive Regex: fqdn=.* ---------------------------- Number of conditions added 1 ----------------------------",
"[jsmith@server ~]USD ipa host-add test.example.com ----------------------------- Added host \"test.example.com\" ----------------------------- Host name: test.example.com Principal name: host/[email protected] Password: False Keytab: False Managed by: test.example.com [jsmith@server ~]USD ipa hostgroup-show allhosts Host-group: allhosts Description: Default hostgroup Member hosts: test.example.com",
"[jsmith@server ~]USD ipa automember-default-group-set --default-group=ipaclients --type=hostgroup [jsmith@server ~]USD ipa automember-default-group-set --default-group=ipausers --type=group",
"[jsmith@server ~]USD ipa automember-default-group-remove --type=hostgroup",
"[jsmith@server ~]USD ipa automember-add --type=group ipausers",
"[jsmith@server ~]USD ipa automember-add-condition ipausers --key=objectclass --type=group --inclusive-regex=ntUser"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/using-automembers-examples |
Chapter 26. Storage [operator.openshift.io/v1] | Chapter 26. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 26.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 26.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 26.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 26.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 26.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 26.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 26.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 26.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/storages DELETE : delete collection of Storage GET : list objects of kind Storage POST : create a Storage /apis/operator.openshift.io/v1/storages/{name} DELETE : delete a Storage GET : read the specified Storage PATCH : partially update the specified Storage PUT : replace the specified Storage /apis/operator.openshift.io/v1/storages/{name}/status GET : read status of the specified Storage PATCH : partially update status of the specified Storage PUT : replace status of the specified Storage 26.2.1. /apis/operator.openshift.io/v1/storages Table 26.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Storage Table 26.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 26.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Storage Table 26.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 26.5. HTTP responses HTTP code Reponse body 200 - OK StorageList schema 401 - Unauthorized Empty HTTP method POST Description create a Storage Table 26.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.7. Body parameters Parameter Type Description body Storage schema Table 26.8. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 202 - Accepted Storage schema 401 - Unauthorized Empty 26.2.2. /apis/operator.openshift.io/v1/storages/{name} Table 26.9. Global path parameters Parameter Type Description name string name of the Storage Table 26.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Storage Table 26.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 26.12. Body parameters Parameter Type Description body DeleteOptions schema Table 26.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Storage Table 26.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 26.15. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Storage Table 26.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.17. Body parameters Parameter Type Description body Patch schema Table 26.18. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Storage Table 26.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.20. Body parameters Parameter Type Description body Storage schema Table 26.21. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty 26.2.3. /apis/operator.openshift.io/v1/storages/{name}/status Table 26.22. Global path parameters Parameter Type Description name string name of the Storage Table 26.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Storage Table 26.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 26.25. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Storage Table 26.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.27. Body parameters Parameter Type Description body Patch schema Table 26.28. HTTP responses HTTP code Reponse body 200 - OK Storage schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Storage Table 26.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 26.30. Body parameters Parameter Type Description body Storage schema Table 26.31. HTTP responses HTTP code Reponse body 200 - OK Storage schema 201 - Created Storage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/storage-operator-openshift-io-v1 |
Using the AMQ .NET Client | Using the AMQ .NET Client Red Hat AMQ 2021.Q1 For Use with AMQ Clients 2.9 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_.net_client/index |
Security and compliance | Security and compliance OpenShift Container Platform 4.17 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"variant: openshift version: 4.17.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }",
"butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml",
"oc apply -f 51-worker-rh-registry-trust.yaml",
"oc get mc",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1",
"oc debug node/<node_name>",
"sh-4.2# chroot /host",
"docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore",
"docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>",
"oc describe machineconfigpool/worker",
"Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3",
"oc debug node/<node> -- chroot /host cat /etc/containers/policy.json",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }",
"oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore",
"oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml",
"Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2",
"oc adm release info <release_version> \\ 1",
"--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---",
"curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt",
"curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1",
"skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1",
"skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key",
"Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55",
"quality.images.openshift.io/<qualityType>.<providerId>: {}",
"quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}",
"{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }",
"{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }",
"oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'",
"annotations: images.openshift.io/deny-execution: true",
"curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'",
"{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }",
"oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc",
"source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc",
"oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc",
"oc set triggers deploy/deployment-example --from-image=example:latest --containers=web",
"{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }",
"docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore",
"oc get event -n default | grep Node",
"1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure",
"oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'",
"{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }",
"oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'",
"4",
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress",
"oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator",
"oc login -u kubeadmin -p <password> https://FQDN:6443",
"oc config view --flatten > kubeconfig-newapi",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config",
"oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2",
"oc get apiserver cluster -o yaml",
"spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.17.0 True False False 145m",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2",
"oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1",
"oc describe service <service_name>",
"Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837",
"oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true",
"oc get configmap <config_map_name> -o yaml",
"apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----",
"oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true",
"oc get apiservice <api_service_name> -o yaml",
"apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>",
"oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true",
"oc get crd <crd_name> -o yaml",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc describe service <service_name>",
"service.beta.openshift.io/serving-cert-secret-name: <secret>",
"oc delete secret <secret> 1",
"oc get secret <service_name>",
"NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s",
"oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate",
"oc delete secret/signing-key -n openshift-service-ca",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'",
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"cat install-config.yaml",
"proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt",
"oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after-",
"oc get secret -n openshift-etcd etcd-signer -oyaml > signer_backup_secret.yaml",
"oc delete secret -n openshift-etcd etcd-signer",
"oc wait --for=condition=Progressing=False --timeout=15m clusteroperator/etcd",
"oc delete configmap -n openshift-etcd etcd-ca-bundle",
"oc adm wait-for-stable-cluster --minimum-stable-period 2m",
"oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis",
"oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis",
"oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')",
"oc get profile.compliance -n openshift-compliance",
"NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1",
"oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8",
"apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight",
"oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events",
"apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1",
"apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>",
"apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4",
"compliance.openshift.io/product-type: Platform/Node",
"apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc get compliancesuites",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name_of_the_suite> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT",
"oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name_of_the_compliance_scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7",
"get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name_of_the_compliance_scan>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2",
"get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3",
"get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite",
"get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'",
"get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-compliance",
"oc get deploy -n openshift-compliance",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID",
"oc -n openshift-compliance get profilebundles rhcos4 -oyaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID",
"oc delete ssb --all -n openshift-compliance",
"oc delete ss --all -n openshift-compliance",
"oc delete suite --all -n openshift-compliance",
"oc delete scan --all -n openshift-compliance",
"oc delete profilebundle.compliance --all -n openshift-compliance",
"oc delete sub --all -n openshift-compliance",
"oc delete csv --all -n openshift-compliance",
"oc delete project openshift-compliance",
"project.project.openshift.io \"openshift-compliance\" deleted",
"oc get project/openshift-compliance",
"Error from server (NotFound): namespaces \"openshift-compliance\" not found",
"oc explain scansettings",
"oc explain scansettingbindings",
"oc describe scansettings default -n openshift-compliance",
"Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>",
"Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc create -f <file-name>.yaml -n openshift-compliance",
"oc get compliancescan -w -n openshift-compliance",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *",
"oc create -f rs-workers.yaml",
"oc get scansettings rs-on-workers -n openshift-compliance -o yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true",
"oc get hostedcluster -A",
"NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3",
"oc create -n openshift-compliance -f mgmt-tp.yaml",
"spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>",
"apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" limits: 2 memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster",
"oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4",
"oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive",
"apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges",
"oc create -n openshift-compliance -f new-profile-node.yaml 1",
"tailoredprofile.compliance.openshift.io/nist-moderate-modified created",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"oc create -n openshift-compliance -f new-scansettingbinding.yaml",
"scansettingbinding.compliance.openshift.io/nist-moderate-modified created",
"oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'",
"{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }",
"oc get pvc -n openshift-compliance rhcos4-moderate-worker",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m",
"oc create -n openshift-compliance -f pod.yaml",
"apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: \"/workers-scan-results\" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker",
"oc cp pv-extract:/workers-scan-results -n openshift-compliance .",
"oc delete pod pv-extract -n openshift-compliance",
"oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite",
"oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan",
"oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'",
"oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'",
"NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high",
"oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'",
"spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied",
"echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"",
"net.ipv4.conf.all.accept_redirects=0",
"oc get nodes -n openshift-compliance",
"NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.30.3",
"oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=",
"node/ip-10-0-166-81.us-east-2.compute.internal labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"",
"oc get mcp -w",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default",
"oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'",
"oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=",
"oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=",
"NAME STATE workers-scan-no-empty-passwords Outdated",
"oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'",
"oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords",
"NAME STATE workers-scan-no-empty-passwords Applied",
"oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge",
"oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied",
"oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge",
"oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master",
"NAME AGE compliance-operator-kubelet-master 2m34s",
"oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists",
"oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc get mc",
"75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'",
"oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=",
"oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=",
"allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml",
"securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created",
"oc get -n openshift-compliance scc restricted-adjusted-compliance",
"NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]",
"oc get events -n openshift-compliance",
"oc describe -n openshift-compliance compliancescan/cis-compliance",
"oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'",
"date -d @1596184628.955853 --utc",
"oc get -n openshift-compliance profilebundle.compliance",
"oc get -n openshift-compliance profile.compliance",
"oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser",
"oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4",
"oc logs -n openshift-compliance pods/<pod-name>",
"oc describe -n openshift-compliance pod/<pod-name> -c profileparser",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created",
"oc get cronjobs",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m",
"oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=",
"oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels",
"NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner",
"oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod",
"Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>",
"oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium",
"oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker",
"NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge",
"oc get mc | grep 75-",
"75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s",
"oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements",
"Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:",
"oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=",
"oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod",
"NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium",
"oc logs -l workload=<workload_name> -c <container_name>",
"spec: config: resources: limits: memory: 500Mi",
"oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge",
"kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"",
"oc get pod ocp4-pci-dss-api-checks-pod -w",
"NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m",
"timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1",
"oc apply -f scansetting.yaml",
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2",
"podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/",
"W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.",
"oc compliance fetch-raw <object-type> <object-name> -o <output-path>",
"oc compliance fetch-raw scansettingbindings my-binding -o /tmp/",
"Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master",
"ls /tmp/ocp4-cis-node-master/",
"ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2",
"bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml",
"ls resultsdir/worker-scan/",
"worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2",
"oc compliance rerun-now scansettingbindings my-binding",
"Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'",
"oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]",
"oc get profile.compliance -n openshift-compliance",
"NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1",
"oc get scansettings -n openshift-compliance",
"NAME AGE default 10m default-auto-apply 10m",
"oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node",
"Creating ScanSettingBinding my-binding",
"oc compliance controls profile ocp4-cis-node",
"+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+",
"oc compliance fetch-fixes profile ocp4-cis -o /tmp",
"No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml",
"head /tmp/ocp4-api-server-audit-log-maxsize.yaml",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100",
"oc get complianceremediations -n openshift-compliance",
"NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied",
"oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp",
"Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml",
"head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc",
"oc compliance view-result ocp4-cis-scheduler-no-bind-address",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity",
"oc create -f <file-name>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity",
"oc create -f <file-name>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc get csv -n openshift-file-integrity",
"oc get deploy -n openshift-file-integrity",
"apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7",
"oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity",
"oc get fileintegrities -n openshift-file-integrity",
"NAME AGE worker-fileintegrity 14s",
"oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"",
"Active",
"oc get fileintegritynodestatuses",
"NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq",
"oc get fileintegritynodestatuses -w",
"NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded",
"[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]",
"oc debug node/ip-10-0-130-192.ec2.internal",
"Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq",
"[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]",
"oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed",
"Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>",
"oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip",
"oc get events --field-selector reason=FileIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active",
"oc get events --field-selector reason=NodeIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed",
"oc get events --field-selector reason=NodeIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed",
"oc explain fileintegrity.spec",
"oc explain fileintegrity.spec.config",
"oc describe cm/worker-fileintegrity",
"@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX",
"oc extract cm/worker-fileintegrity --keys=aide.conf",
"vim aide.conf",
"/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db",
"!/opt/mydaemon/",
"/hostroot/etc/ CONTENT_EX",
"oc create cm master-aide-conf --from-file=aide.conf",
"apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity",
"oc describe cm/master-fileintegrity | grep /opt/mydaemon",
"!/hostroot/opt/mydaemon",
"oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=",
"ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55",
"oc -n openshift-file-integrity get ds/aide-worker-fileintegrity",
"oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity",
"oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6",
"Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check",
"oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"",
"oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity",
"apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: \"true\"",
"oc create -f namespace-object.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles",
"oc create -f operator-group-object.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f subscription-object.yaml",
"oc get csv -n openshift-security-profiles",
"oc get deploy -n openshift-security-profiles",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"verbosity\":1}}'",
"securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched",
"oc new-project my-namespace",
"apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG",
"apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc -n my-namespace get seccompprofile profile1 --output wide",
"NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json",
"oc get sp profile1 --output=jsonpath='{.status.localhostProfile}'",
"operator/my-namespace/profile1.json",
"spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json",
"oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge",
"deployment.apps/myapp patched",
"oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq .",
"{ \"seccompProfile\": { \"localhostProfile\": \"operator/my-namespace/profile1.json\", \"type\": \"localhost\" } }",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3",
"oc label ns my-namespace spo.x-k8s.io/enable-binding=true",
"apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21",
"oc create -f test-pod.yaml",
"oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}'",
"{\"localhostProfile\":\"operator/my-namespace/profile.json\",\"type\":\"Localhost\"}",
"oc new-project my-namespace",
"oc label ns my-namespace spo.x-k8s.io/enable-recording=true",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app",
"apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc -n my-namespace get pods",
"NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s",
"oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher",
"I0523 14:19:08.747313 430694 enricher.go:445] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"executable\"=\"/usr/local/bin/redis-server\" \"namespace\"=\"my-namespace\" \"node\"=\"xiyuan-23-5g2q9-worker-eastus2-6rpgf\" \"pid\"=656802 \"pod\"=\"my-pod\" \"syscallID\"=0 \"syscallName\"=\"read\" \"timestamp\"=\"1684851548.745:207179\" \"type\"=\"seccomp\"",
"oc -n my-namepace delete pod my-pod",
"oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record",
"oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080",
"oc delete deployment nginx-deploy -n my-namespace",
"oc delete profilerecording test-recording -n my-namespace",
"oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME STATUS AGE test-recording-nginx-record Installed 55s",
"oc get seccompprofiles test-recording-nginx-record -o yaml",
"oc new-project nginx-deploy",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container",
"oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure",
"selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met",
"oc -n openshift-security-profiles rsh -c selinuxd ds/spod",
"cat /etc/selinux.d/nginx-secure_nginx-deploy.cil",
"(block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) )",
"semodule -l | grep nginx-secure",
"nginx-secure_nginx-deploy",
"oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false",
"oc label ns nginx-deploy --overwrite=true pod-security.kubernetes.io/enforce=privileged",
"oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}'",
"nginx-secure_nginx-deploy.process",
"apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3",
"oc label ns my-namespace spo.x-k8s.io/enable-binding=true",
"apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21",
"oc create -f test-pod.yaml",
"oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}'",
"profile_nginx-binding.process",
"oc new-project nginx-secure",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use",
"apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure",
"apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure metadata: labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080",
"oc new-project my-namespace",
"oc label ns my-namespace spo.x-k8s.io/enable-recording=true",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app",
"apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc -n my-namespace get pods",
"NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s",
"oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher",
"I0517 13:55:36.383187 348295 enricher.go:376] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"namespace\"=\"my-namespace\" \"node\"=\"ip-10-0-189-53.us-east-2.compute.internal\" \"perm\"=\"name_bind\" \"pod\"=\"my-pod\" \"profile\"=\"test-recording_redis_6kmrb_1684331729\" \"scontext\"=\"system_u:system_r:selinuxrecording.process:s0:c4,c27\" \"tclass\"=\"tcp_socket\" \"tcontext\"=\"system_u:object_r:redis_port_t:s0\" \"timestamp\"=\"1684331735.105:273965\" \"type\"=\"selinux\"",
"oc -n my-namepace delete pod my-pod",
"oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed",
"apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record",
"oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true",
"apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080",
"oc delete deployment nginx-deploy -n my-namespace",
"oc delete profilerecording test-recording -n my-namespace",
"oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace",
"NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed",
"oc get selinuxprofiles test-recording-nginx-record -o yaml",
"oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"allowedSyscalls\": [\"exit\", \"exit_group\", \"futex\", \"nanosleep\"]}}'",
"apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableMemoryOptimization\":true}}'",
"apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: \"true\"",
"oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"daemonResourceRequirements\": { \"requests\": {\"memory\": \"256Mi\", \"cpu\": \"250m\"}, \"limits\": {\"memory\": \"512Mi\", \"cpu\": \"500m\"}}}}'",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"priorityClassName\":\"my-priority-class\"}}'",
"securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched",
"oc get svc/metrics -n openshift-security-profiles",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s",
"oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest -n openshift-security-profiles metrics-test -- bash -c 'curl -ks -H \"Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://metrics.openshift-security-profiles/metrics-spod'",
"HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation=\"delete\"} 1 security_profiles_operator_seccomp_profile_total{operation=\"update\"} 2",
"oc get clusterrolebinding spo-metrics-client -o wide",
"NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default",
"oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableLogEnricher\":true}}'",
"securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched",
"oc -n openshift-security-profiles logs -f ds/spod log-enricher",
"I0623 12:51:04.257814 1854764 deleg.go:130] setup \"msg\"=\"starting component: log-enricher\" \"buildDate\"=\"1980-01-01T00:00:00Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"unknown\" \"gitTreeState\"=\"clean\" \"goVersion\"=\"go1.16.2\" \"platform\"=\"linux/amd64\" \"version\"=\"0.4.0-dev\" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher \"msg\"=\"Starting log-enricher on node: 127.0.0.1\" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher \"msg\"=\"Connecting to local GRPC server\" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher \"msg\"=\"Reading from file /var/log/audit/audit.log\" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2}",
"apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG",
"apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc -n openshift-security-profiles logs -f ds/spod log-enricher",
"... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.205:1061\" \"type\"=\"seccomp\" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1062\" \"type\"=\"seccomp\" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1063\" \"type\"=\"seccomp\" ... I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=12 \"syscallName\"=\"brk\" \"timestamp\"=\"1624453150.235:2873\" \"type\"=\"seccomp\" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=21 \"syscallName\"=\"access\" \"timestamp\"=\"1624453150.235:2874\" \"type\"=\"seccomp\" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2875\" \"type\"=\"seccomp\" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=5 \"syscallName\"=\"fstat\" \"timestamp\"=\"1624453150.235:2876\" \"type\"=\"seccomp\" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=9 \"syscallName\"=\"mmap\" \"timestamp\"=\"1624453150.235:2877\" \"type\"=\"seccomp\" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.235:2878\" \"type\"=\"seccomp\" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2879\" \"type\"=\"seccomp\" ...",
"spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - \"true\"",
"oc -n openshift-security-profiles patch spod spod -p USD(cat /tmp/spod-wh.patch) --type=merge",
"oc get MutatingWebhookConfiguration spo-mutating-webhook-configuration -oyaml",
"oc -n openshift-security-profiles logs openshift-security-profiles-<id>",
"I1019 19:34:14.942464 1 main.go:90] setup \"msg\"=\"starting openshift-security-profiles\" \"buildDate\"=\"2020-10-19T19:31:24Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"a3ef0e1ea6405092268c18f240b62015c247dd9d\" \"gitTreeState\"=\"dirty\" \"goVersion\"=\"go1.15.1\" \"platform\"=\"linux/amd64\" \"version\"=\"0.2.0-dev\" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\" \"addr\"=\":8080\" I1019 19:34:15.349076 1 main.go:126] setup \"msg\"=\"starting manager\" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager \"msg\"=\"starting metrics server\" \"path\"=\"/metrics\" I1019 19:34:15.350201 1 controller.go:142] controller \"msg\"=\"Starting EventSource\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"defaultAction\":\"\"}}} I1019 19:34:15.450674 1 controller.go:149] controller \"msg\"=\"Starting Controller\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" I1019 19:34:15.450757 1 controller.go:176] controller \"msg\"=\"Starting workers\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"worker count\"=1 I1019 19:34:15.453102 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"nginx-1.19.1\" \"name\"=\"nginx-1.19.1\" \"resource version\"=\"728\" I1019 19:34:15.453618 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"openshift-security-profiles\" \"name\"=\"openshift-security-profiles\" \"resource version\"=\"729\"",
"oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload",
"profile-block.json profile-complain.json",
"oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration",
"oc get packagemanifests -n openshift-marketplace | grep tang",
"tang-operator Red Hat",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tang-operator namespace: openshift-operators spec: channel: stable 1 installPlanApproval: Automatic name: tang-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f tang-operator.yaml",
"oc -n openshift-operators get pods",
"NAME READY STATUS RESTARTS AGE tang-operator-controller-manager-694b754bd6-4zk7x 2/2 Running 0 12s",
"oc -n nbde describe tangserver",
"... Status: Active Keys: File Name: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...",
"apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: - sha1: \"PvYQKtrTuYsMV2AomUeHrUWkCGg\" 1",
"oc apply -f minimal-keyretrieve-rotate-tangserver.yaml",
"oc -n nbde describe tangserver",
"... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Hidden Keys: File Name: .QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg.jwk Generated: 2023-10-25 15:37:29.126928965 +0000 Hidden: 2023-10-25 15:38:13.515467436 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...",
"oc -n nbde describe tangserver",
"... Status: Active Keys: File Name: PvYQKtrTuYsMV2AomUeHrUWkCGg.jwk Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...",
"apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: [] 1",
"oc apply -f hidden-keys-deletion-tangserver.yaml",
"oc -n nbde describe tangserver",
"... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Status: Ready: 1 Running: 1 Service External URL: http://35.222.247.84:7500/adv Tang Server Error: No Events: ...",
"curl 2> /dev/null http://34.28.173.205:7500/adv | jq",
"{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }",
"oc -n nbde describe tangserver",
"... Spec: ... Status: Ready: 1 Running: 1 Service External URL: http://34.28.173.205:7500/adv Tang Server Error: No Events: ...",
"curl 2> /dev/null http://34.28.173.205:7500/adv | jq",
"{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }",
"oc get pods -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s",
"oc new-project cert-manager-operator",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - \"cert-manager-operator\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: [] spec: {}",
"oc create -f operatorGroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic",
"oc create -f subscription.yaml",
"oc get subscription -n cert-manager-operator",
"NAME PACKAGE SOURCE CHANNEL openshift-cert-manager-operator openshift-cert-manager-operator redhat-operators stable-v1",
"oc get csv -n cert-manager-operator",
"NAME DISPLAY VERSION REPLACES PHASE cert-manager-operator.v1.13.0 cert-manager Operator for Red Hat OpenShift 1.13.0 cert-manager-operator.v1.12.1 Succeeded",
"oc get pods -n cert-manager-operator",
"NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-695b4d46cb-r4hld 2/2 Running 0 7m4s",
"oc get pods -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-58b7f649c4-dp6l4 1/1 Running 0 7m1s cert-manager-cainjector-5565b8f897-gx25h 1/1 Running 0 7m37s cert-manager-webhook-9bc98cbdd-f972x 1/1 Running 0 7m40s",
"oc create configmap trusted-ca -n cert-manager",
"oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'",
"oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && rollout status deployment/cert-manager -n cert-manager && rollout status deployment/cert-manager-webhook -n cert-manager && rollout status deployment/cert-manager-cainjector -n cert-manager",
"deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out",
"oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}",
"[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]",
"oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}",
"[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s",
"oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml",
"env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<server_address>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8",
"oc get pods -n cert-manager -o yaml",
"metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4",
"oc get certificate",
"NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml",
"metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref",
"oc get deployment -n cert-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m",
"oc get deployment -n cert-manager -o yaml",
"metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3",
"oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"",
"certmanager.operator.openshift.io/cluster patched",
"oc get deployment -n cert-manager -o yaml",
"metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi",
"oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule\" 6",
"oc get pods -n cert-manager -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none>",
"oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}{.spec.template.spec.nodeSelector}{\"\\n\"}{.spec.template.spec.tolerations}{\"\\n\\n\"}{end}'",
"cert-manager {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-cainjector {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-webhook {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}]",
"oc get events -n cert-manager --field-selector reason=Scheduled",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager",
"oc create -f sample-credential-request.yaml",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds",
"mkdir credentials-request",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager",
"ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>",
"2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds",
"oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s",
"oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list",
"pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager",
"oc create -f sample-credential-request.yaml",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials",
"mkdir credentials-request",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager",
"ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> --credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>",
"ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel",
"ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials",
"apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:",
"apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4",
"oc patch ingress/<ingress-name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}' -n <namespace>",
"oc create -f acme-cluster-issuer.yaml",
"apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1",
"oc create -f namespace.yaml",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80",
"oc create -f ingress.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project my-issuer-namespace",
"oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project my-issuer-namespace",
"oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud",
"oc create -f issuer.yaml",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n <issuer_namespace>",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: \"api.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"api.<cluster_base_domain>\" 4 issuerRef: name: <issuer_name> 5 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n openshift-config",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: \"apps.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"apps.<cluster_base_domain>\" 4 - \"*.apps.<cluster_base_domain>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n openshift-ingress",
"oc create route edge <route_name> \\ 1 --service=<service_name> \\ 2 --hostname=<hostname> \\ 3 --namespace=<namespace> 4",
"oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-acme namespace: <namespace> 1 spec: acme: server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-acme-account-key solvers: - http01: ingress: ingressClassName: openshift-default EOF",
"oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: example-route-cert namespace: <namespace> 1 spec: commonName: <hostname> 2 dnsNames: - <hostname> 3 usages: - server auth issuerRef: kind: Issuer name: letsencrypt-acme secretName: <secret_name> 4 EOF",
"oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret_name> \\ 1 --namespace=<namespace> 2",
"oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<namespace> 1",
"oc patch route <route_name> \\ 1 -n <namespace> \\ 2 --type=merge -p '{\"spec\":{\"tls\":{\"externalCertificate\":{\"name\":\"<secret_name>\"}}}}' 3",
"oc get certificate -n <namespace> 1 oc get secret -n <namespace> 2",
"curl -IsS https://<hostname> 1",
"curl -v https://<hostname> 1",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"UNSUPPORTED_ADDON_FEATURES\",\"value\":\"IstioCSR=true\"}]}}}'",
"oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator",
"deployment \"cert-manager-operator-controller-manager\" successfully rolled out",
"apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca",
"oc get issuer istio-ca -n <istio_project_name>",
"NAME READY AGE istio-ca True 3m",
"oc new-project <istio_csr_project_name>",
"apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system",
"oc create -f IstioCSR.yaml",
"oc get deployment -n <istio_csr_project_name>",
"NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s",
"oc get pod -n <istio_csr_project_name>",
"NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s",
"oc -n <istio_csr_project_name> logs <istio_csr_pod_name>",
"oc -n cert-manager-operator logs <cert_manager_operator_pod_name>",
"oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default",
"oc get clusterrolebindings,clusterroles -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\"",
"oc get certificate,deployments,services,serviceaccounts -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>",
"oc get roles,rolebindings -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>",
"oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>",
"oc label namespace cert-manager openshift.io/cluster-monitoring=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager",
"oc create -f monitoring.yaml",
"{instance=\"<endpoint>\"} 1",
"{endpoint=\"tcp-prometheus-servicemonitor\"}",
"oc edit certmanager.operator cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: <log_level> 1",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1",
"oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container",
"deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9",
"oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator",
"oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}",
"oc adm node-logs --role=master --path=openshift-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}",
"oc adm node-logs --role=master --path=kube-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=kube-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}",
"oc adm node-logs --role=master --path=oauth-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm node-logs --role=master --path=oauth-server/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=oauth-server/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'",
"oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\")'",
"oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'",
"oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'",
"oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"oc edit apiserver cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"oc edit apiserver cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"oc edit apiserver cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"oc explain <component>.spec.tlsSecurityProfile.<profile> 1",
"oc explain apiserver.spec.tlsSecurityProfile.intermediate",
"KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2",
"oc explain <component>.spec.tlsSecurityProfile 1",
"oc explain ingresscontroller.spec.tlsSecurityProfile",
"KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string>",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old",
"oc edit APIServer cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe apiserver cluster",
"Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"oc describe etcd cluster",
"Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12",
"oc logs machine-config-server-5msdv -n openshift-machine-config-operator",
"I0905 13:48:36.968688 1 start.go:51] Launching server with tls min version: VersionTLS12 & cipher suites [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #",
"oc get pods -n <namespace>",
"oc get pods -n workshop",
"NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s",
"oc get pod parksmap-1-4xkwf -n workshop -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2",
"oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json",
"seccompProfiles: - localhost/<custom-name>.json 1",
"spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1",
"oc edit apiserver.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1",
"oc edit apiserver",
"spec: encryption: type: aesgcm 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators",
"oc get packagemanifests container-security-operator -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{\"\\n\"}{end}' | awk '{print \"STARTING_CSV=\" USD1 \" CHANNEL=\" USD2 }' | sort -Vr | head -1",
"STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2",
"oc apply -f container-security-operator.yaml",
"subscription.operators.coreos.com/container-security-operator created",
"oc get vuln --all-namespaces",
"NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s",
"oc describe vuln --namespace mynamespace sha256.ac50e3752",
"Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries",
"oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com",
"customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted",
"echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey",
"clevis decrypt </tmp/encrypted.oldkey",
"tang-show-keys 7500",
"36AHjNH3NZDSnlONLz1-V4ie6t8",
"cd /var/db/tang/",
"ls -A1",
"36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk",
"for key in *.jwk; do mv -- \"USDkey\" \".USDkey\"; done",
"/usr/libexec/tangd-keygen /var/db/tang",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encrypted.oldkey",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d USDROOT_DEV -s 1 echo \"Applying new tang pin: USDNEW_TANG_PIN\" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c \"USDNEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon",
"oc apply -f tang-rekey.yaml",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s",
"oc get -n openshift-machine-config-operator ds tang-rekey",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt",
"Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!",
"echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt",
"okay",
"oc get pods -A | grep tang-rekey",
"openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m",
"oc logs tang-rekey-7ks6h",
"Current tang pin: 1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully",
"cd /var/db/tang/",
"ls -A1",
".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"rm .*.jwk",
"ls -A1",
"Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk",
"tang-show-keys 7500",
"WOjQYkyK7DxY_T5pMncMO5w0f6E",
"clevis decrypt </tmp/encryptValidation",
"Error communicating with the server!",
"sudo clevis luks pass -d /dev/vda2 -s 1",
"sudo clevis luks regen -d /dev/vda2 -s 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/security_and_compliance/index |
34.2. At and Batch | 34.2. At and Batch While cron is used to schedule recurring tasks, the at command is used to schedule a one-time task at a specific time and the batch command is used to schedule a one-time task to be executed when the systems load average drops below 0.8. To use at or batch , the at RPM package must be installed, and the atd service must be running. To determine if the package is installed, use the rpm -q at command. To determine if the service is running, use the command /sbin/service atd status . 34.2.1. Configuring At Jobs To schedule a one-time job at a specific time, type the command at time , where time is the time to execute the command. The argument time can be one of the following: HH:MM format - For example, 04:00 specifies 4:00 a.m. If the time is already past, it is executed at the specified time the day. midnight - Specifies 12:00 a.m. noon - Specifies 12:00 p.m. teatime - Specifies 4:00 p.m. month-name day year format - For example, January 15 2002 specifies the 15th day of January in the year 2002. The year is optional. MMDDYY, MM/DD/YY, or MM.DD.YY formats - For example, 011502 for the 15th day of January in the year 2002. now + time - time is in minutes, hours, days, or weeks. For example, now + 5 days specifies that the command should be executed at the same time five days from now. The time must be specified first, followed by the optional date. For more information about the time format, read the /usr/share/doc/at- <version> /timespec text file. After typing the at command with the time argument, the at> prompt is displayed. Type the command to execute, press Enter , and type Ctrl + D . Multiple commands can be specified by typing each command followed by the Enter key. After typing all the commands, press Enter to go to a blank line and type Ctrl + D . Alternatively, a shell script can be entered at the prompt, pressing Enter after each line in the script, and typing Ctrl + D on a blank line to exit. If a script is entered, the shell used is the shell set in the user's SHELL environment, the user's login shell, or /bin/sh (whichever is found first). If the set of commands or script tries to display information to standard out, the output is emailed to the user. Use the command atq to view pending jobs. Refer to Section 34.2.3, "Viewing Pending Jobs" for more information. Usage of the at command can be restricted. For more information, refer to Section 34.2.5, "Controlling Access to At and Batch" for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/automated_tasks-at_and_batch |
2.7. Hardware Considerations | 2.7. Hardware Considerations You should take the following hardware considerations into account when deploying a GFS2 file system. Use Higher Quality Storage Options GFS2 can operate on cheaper shared storage options, such as iSCSI or Fibre Channel over Ethernet (FCoE), but you will get better performance if you buy higher quality storage with larger caching capacity. Red Hat performs most quality, sanity, and performance tests on SAN storage with Fibre Channel interconnect. As a general rule, it is always better to deploy something that has been tested first. Test Network Equipment Before Deploying Higher quality, faster network equipment makes cluster communications and GFS2 run faster with better reliability. However, you do not have to purchase the most expensive hardware. Some of the most expensive network switches have problems passing multicast packets, which are used for passing fcntl locks (flocks), whereas cheaper commodity network switches are sometimes faster and more reliable. Red Hat recommends trying equipment before deploying it into full production. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-hardware-gfs2 |
Chapter 34. ExternalLogging schema reference | Chapter 34. ExternalLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the ExternalLogging type from InlineLogging . It must have the value external for the type ExternalLogging . Property Property type Description type string Must be external . valueFrom ExternalConfigurationReference ConfigMap entry where the logging configuration is stored. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-externallogging-reference |
4.6. Running Red Hat JBoss Data Grid in Karaf (OSGi) | 4.6. Running Red Hat JBoss Data Grid in Karaf (OSGi) Apache Karaf is a powerful, lightweight OSGi-based runtime container into which components and applications are deployed. OSGi implements a dynamic component model that does not exist in standalone JVM environments. OSGi containers such as Karaf include a rich set of tools for managing the life cycle of an application. All dependencies between individual modules, including version numbers, must be explicitly specified. Where more than one class of the same name exists, the strict rules of OSGi specify which of the classes will be used by your bundle. Report a bug 4.6.1. Running a Deployment of JBoss Data Grid in Karaf (Remote Client-Server) The Red Hat JBoss Data Grid Hot Rod client can be run in an OSGi-based container such as Karaf. Use the path in the JBoss Data Grid Maven repository to set up Karaf. Additionally, JBoss Data Grid requires a features file, located in org/infinispan/infinispan-remote/ USD{VERSION} . This file lists all dependencies for the Hot Rod client in OSGi, while also making it simpler to install the feature into Karaf (version 2.3.3 or 3.0). Report a bug 4.6.2. Installing the Hot Rod client feature in Karaf Red Hat JBoss Data Grid's Hot Rod feature is installed in Karaf as follows: Prerequisite Configure the Red Hat JBoss Data Grid Maven Repository. Procedure 4.3. Install the Hot Rod Feature in Karaf Karaf 2.3.3 For Karaf 2.3.3 use the following commands: Verify that the feature was successfully installed as follows: Karaf 3.0.0 For Karaf use the following commands. Verify that the feature was successfully installed: Alternatively, use the -i command parameter to install the Hot Rod Client feature using the following: Report a bug 4.6.3. Installing Red Hat JBoss Data Grid in Karaf (Library Mode) The Red Hat JBoss Data Grid JAR files contain the required OSGi manifest headers and are used inside OSGi runtime environments as OSGi bundles. Additionally, the required third-party dependencies must be installed. These can be installed individually, or altogether via the features file, which defines all required dependencies. To install bundles using the features file: Register the feature repositories inside Karaf. Install the features contained in the repositories. Procedure 4.4. Installing bundles using the features file Start the Karaf console Start the Karaf console using the following commands: Register a feature repository Register a feature repository as follows: For Karaf 2.3.3: For Karaf 3.0.0: Result JBoss Data Grid runs in library mode using Karaf. The URL for feature repositories is constructed from the Maven artifact coordinates using the following format: Important The JPA Cache Store is not supported in Apache Karaf in JBoss Data Grid 6.6. Important Querying in Library mode (which is covered in the Infinispan Query Guide) is not supported in Apache Karaf in JBoss Data Grid 6.6. Report a bug | [
"karaf@root> features:addUrl mvn:org.infinispan/infinispan-remote/ USD{VERSION} /xml/features",
"karaf@root> features:install infinispan-remote",
"karaf@root> features:list //output",
"karaf@root> feature:repo-add mvn:org.infinispan/infinispan-remote/ USD{VERSION} /xml/features",
"karaf@root> feature:install infinispan-remote",
"karaf@root> feature:list",
"karaf@root()> feature:repo-add -i mvn:org.infinispan/infinispan-remote/ USD{VERSION} /xml/features",
"cd USDAPACHE_KARAF_HOME /bin ./karaf",
"karaf@root()> features:addUrl mvn:org.infinispan/infinispan-embedded/ USD{VERSION} /xml/features",
"karaf@root> features:install infinispan-embedded",
"karaf@root()> feature:repo-add mvn:org.infinispan/infinispan-embedded/ USD{VERSION} /xml/features",
"karaf@root> feature:install infinispan-embedded",
"mvn:<groupId>/<artifactId>/<version>/xml/features"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-running_red_hat_jboss_data_grid_in_karaf_osgi |
1.12. Additional Data Source Connection Properties | 1.12. Additional Data Source Connection Properties When using the driver class, various properties are derived from the URL. For the data source class, these properties are set using the following additional methods: Table 1.3. Data Source Connection Properties Method Name Type Description setAlternateServers String Optional delimited list of host:port entries. Refer to Section 2.1, "Using Multiple Hosts" for more information. setAdditionalProperties String Optional setting of properties that has the same format as the property string in a driver connection URL. Refer to Section 1.8, "Driver Connection URL Format" setDatabaseName String The name of a virtual database (VDB) deployed to JBoss Data Virtualization. Important VDB names can contain version information; for example, myvdb.2 . If such a name is used in the URL, this has the same effect as supplying a version=2 connection property. Note that if the VDB name contains version information, you cannot also use the version property in the same request. setDatabaseVersion String The VDB version. setDataSourceName String The name given to this data source setPortNumber int The port number on which the server process is listening. setServerName String The server hostname where the JBoss Data Virtualization runtime is installed. setSecure boolean Secure connection. Flag to indicate to use SSL (mms) based connection between client and server. Note All of the URL Connection Properties can be used on the data source. To do so, use the AdditionalProperties setter method if the corresponding setter method is not already available. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/Additional_Data_Source_Connection_Properties |
28.2. GOOGLE_PING Configuration | 28.2. GOOGLE_PING Configuration Red Hat JBoss Data Grid works with Google Compute Engine in the following way: In Library mode, use the JGroups' configuration file default-configs/default-jgroups-google.xml or use the GOOGLE_PING protocol in an existing configuration file. In Remote Client-Server mode, define the properties on the command line when you start the server to use the JGroups Google stack ( see example in Section 28.2.1, "Starting the Server in Google Compute Engine" ). To configure the GOOGLE_PING protocol to work in Google Compute Engine in Library and Remote Client-Server mode: Use JGroups bucket. These buckets use Google Compute Engine credentials. Use the access key. Use the secret access key. Note Only the TCP protocol is supported in Google Compute Engine since multicasts are not allowed. Report a bug 28.2.1. Starting the Server in Google Compute Engine This configuration requires access to a bucket that can only be accessed with the appropriate Google Compute Engine credentials. Ensure that the GOOGLE_PING configuration includes the following properties: the access_key and the secret_access_key properties for the Google Compute Engine user. Example 28.1. Start the Red Hat JBoss Data Grid Server with a Bucket Run the following command from the top level of the server directory to start the Red Hat JBoss Data Grid server using a bucket: Replace {server_ip_address} with the server's IP address. Replace {google_bucket_name} with the appropriate bucket name. Replace {access_key} with the user's access key. Replace {secret_access_key} with the user's secret access key. Report a bug | [
"bin/clustered.sh -Djboss.bind.address= {server_ip_address} -Djboss.bind.address.management= {server_ip_address} -Djboss.default.jgroups.stack=google -Djgroups.google.bucket= {google_bucket_name} -Djgroups.google.access_key= {access_key} -Djgroups.google.secret_access_key= {secret_access_key}"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-google_ping_configuration |
Chapter 79. Next steps | Chapter 79. steps Packaging and deploying an Red Hat Process Automation Manager project | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/next_steps_6 |
Chapter 1. Release notes | Chapter 1. Release notes 1.1. Logging 5.9 Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.1.1. Logging 5.9.12 This release includes RHSA-2025:1985 . 1.1.1.1. CVEs CVE-2020-11023 CVE-2022-49043 CVE-2024-12797 CVE-2025-25184 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.2. Logging 5.9.11 This release includes RHSA-2025:1227 . 1.1.2.1. Enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6581 ) 1.1.2.2. Bug Fixes Before this update, the collector container mounted all log sources. With this update, it mounts only the defined input sources. ( LOG-5691 ) Before this update, fluentd ignored the no_proxy setting when using the HTTP output. With this update, the no_proxy setting is picked up correctly. ( LOG-6586 ) Before this update, clicking on "more logs" from the pod detail view triggered a false permission error due to a missing namespace parameter required for authorization. With this update, clicking "more logs" includes the namespace parameter, preventing the permission error and allowing access to more logs. ( LOG-6645 ) Before this update, specifying syslog.addLogSource added namespace_name , container_name , and pod_name to the messages of non-container logs. With this update, only container logs will include namespace_name , container_name , and pod_name in their messages when syslog.addLogSource is set. ( LOG-6656 ) 1.1.2.3. CVEs CVE-2024-12085 CVE-2024-47220 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.3. Logging 5.9.10 This release includes RHSA-2024:10990 . 1.1.3.1. Bug Fixes Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default , kube , openshift , and namespaces that begin with openshift- or kube- . ( LOG-6044 ) Before this update, Loki attempted to detect the level of log messages, which caused confusion when the collector also detected log levels and produced different results. With this update, automatic log level detection in Loki is disabled. ( LOG-6321 ) Before this update, when the ClusterLogForwarder custom resource defined tls.insecureSkipVerify: true in combination with type: http and an HTTP URL, the certificate validation was not skipped. This misconfiguration caused the collector to fail because it attempted to validate certificates despite the setting. With this update, when tls.insecureSkipVerify: true is set, the URL is checked for the HTTPS. An HTTP URL will cause a misconfiguration error. ( LOG-6376 ) Before this update, when any infrastructure namespaces were specified in the application inputs in the ClusterLogForwarder custom resource, logs were generated with the incorrect log_type: application tags. With this update, when any infrastructure namespaces are specified in the application inputs, logs are generated with the correct log_type: infrastructure tags. ( LOG-6377 ) Important When updating to Logging for Red Hat OpenShift 5.9.10, if you previously added any infrastructure namespaces in the application inputs in the ClusterLogForwarder custom resource, you must add the permissions for collecting logs from infrastructure namespaces. For more details, see "Setting up log collection". 1.1.3.2. CVEs CVE-2024-2236 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 CVE-2024-55565 1.1.4. Logging 5.9.9 This release includes RHBA-2024:10049 . 1.1.4.1. Bug fixes Before this update, upgrades to version 6.0 failed with errors if a Log File Metric Exporter instance was present. This update fixes the issue, enabling upgrades to proceed smoothly without errors. ( LOG-6201 ) Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. ( LOG-6293 ) 1.1.4.2. CVEs CVE-2024-6119 1.1.5. Logging 5.9.8 This release includes OpenShift Logging Bug Fix Release 5.9.8 . 1.1.5.1. Bug fixes Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. ( LOG-6181 ) Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. ( LOG-6183 ) Before this update, an LF character in the vector.toml file under the ES authentication configuration caused the collector pods to crash. This update removes the newline characters from the username and password fields, resolving the issue. ( LOG-6206 ) Before this update, it was possible to set the .containerLimit.maxRecordsPerSecond parameter in the ClusterLogForwarder custom resource to 0 , which could lead to an exception during Vector's startup. With this update, the configuration is validated before being applied, and any invalid values (less than or equal to zero) are rejected. ( LOG-6214 ) 1.1.5.2. CVEs ( CVE-2024-24791 ) ( CVE-2024-34155 ) ( CVE-2024-34156 ) ( CVE-2024-34158 ) ( CVE-2024-6119 ( CVE-2024-45490 ( CVE-2024-45491 ( CVE-2024-45492 1.1.6. Logging 5.9.7 This release includes OpenShift Logging Bug Fix Release 5.9.7 . 1.1.6.1. Bug fixes Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the clusterlogforwarder.spec.outputs.http.timeout parameter is now correctly applied, ensuring Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration. ( LOG-6125 ) Before this update, the TLS section was added without verifying the broker URL schema, resulting in SSL connection errors if the URLs did not start with tls . With this update, the TLS section is now added only if the broker URLs start with tls , preventing SSL connection errors. ( LOG-6041 ) 1.1.6.2. CVEs CVE-2024-6104 CVE-2024-6119 CVE-2024-34397 CVE-2024-45296 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-45801 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.7. Logging 5.9.6 This release includes OpenShift Logging Bug Fix Release 5.9.6 . 1.1.7.1. Bug fixes Before this update, the collector deployment ignored secret changes, causing receivers to reject logs. With this update, the system rolls out a new pod when there is a change in the secret value, ensuring that the collector reloads the updated secrets. ( LOG-5525 ) Before this update, the Vector could not correctly parse field values that included a single dollar sign ( USD ). With this update, field values with a single dollar sign are automatically changed to two dollar signs ( USDUSD ), ensuring proper parsing by the Vector. ( LOG-5602 ) Before this update, the drop filter could not handle non-string values (e.g., .responseStatus.code: 403 ). With this update, the drop filter now works properly with these values. ( LOG-5815 ) Before this update, the collector used the default settings to collect audit logs, without handling the backload from output receivers. With this update, the process for collecting audit logs has been improved to better manage file handling and log reading efficiency. ( LOG-5866 ) Before this update, the must-gather tool failed on clusters with non-AMD64 architectures such as Azure Resource Manager (ARM) or PowerPC. With this update, the tool now detects the cluster architecture at runtime and uses architecture-independent paths and dependencies. The detection allows must-gather to run smoothly on platforms like ARM and PowerPC. ( LOG-5997 ) Before this update, the log level was set using a mix of structured and unstructured keywords that were unclear. With this update, the log level follows a clear, documented order, starting with structured keywords. ( LOG-6016 ) Before this update, multiple unnamed pipelines writing to the default output in the ClusterLogForwarder caused a validation error due to duplicate auto-generated names. With this update, the pipeline names are now generated without duplicates. ( LOG-6033 ) Before this update, the collector pods did not have the PreferredScheduling annotation. With this update, the PreferredScheduling annotation is added to the collector daemonset. ( LOG-6023 ) 1.1.7.2. CVEs CVE-2024-0286 CVE-2024-2398 CVE-2024-37370 CVE-2024-37371 1.1.8. Logging 5.9.5 This release includes OpenShift Logging Bug Fix Release 5.9.5 1.1.8.1. Bug Fixes Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. ( LOG-5855 ) Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. ( LOG-5895 ) Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5945 ) 1.1.8.2. CVEs None. 1.1.9. Logging 5.9.4 This release includes OpenShift Logging Bug Fix Release 5.9.4 1.1.9.1. Bug Fixes Before this update, an incorrectly formatted timeout configuration caused the OCP plugin to crash. With this update, a validation prevents the crash and informs the user about the incorrect configuration. ( LOG-5373 ) Before this update, workloads with labels containing - caused an error in the collector when normalizing log entries. With this update, the configuration change ensures the collector uses the correct syntax. ( LOG-5524 ) Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. ( LOG-5697 ) Before this update, the Loki Operator would crash if the CredentialRequest specification was registered in an environment without the cloud-credentials-operator . With this update, the CredentialRequest specification only registers in environments that are cloud-credentials-operator enabled. ( LOG-5701 ) Before this update, the Logging Operator watched and processed all config maps across the cluster. With this update, the dashboard controller only watches the config map for the logging dashboard. ( LOG-5702 ) Before this update, the ClusterLogForwarder introduced an extra space in the message payload which did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. ( LOG-5707 ) Before this update, removing the seeding for grafana-dashboard-cluster-logging as a part of ( LOG-5308 ) broke new greenfield deployments without dashboards. With this update, the Logging Operator seeds the dashboard at the beginning and continues to update it for changes. ( LOG-5747 ) Before this update, LokiStack was missing a route for the Volume API causing the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5749 ) 1.1.9.2. CVEs CVE-2024-24790 1.1.10. Logging 5.9.3 This release includes OpenShift Logging Bug Fix Release 5.9.3 1.1.10.1. Bug Fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5614 ) Before this update, monitoring the Vector collector output buffer state was not possible. With this update, monitoring and alerting the Vector collector output buffer size is possible that improves observability capabilities and helps keep the system running optimally. ( LOG-5586 ) 1.1.10.2. CVEs CVE-2024-2961 CVE-2024-28182 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.1.11. Logging 5.9.2 This release includes OpenShift Logging Bug Fix Release 5.9.2 1.1.11.1. Bug Fixes Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. ( LOG-4910 ) Before this update, the rotated infrastructure log files were sent to the application index in some scenarios due to an incorrect configuration in the Vector log collector. With this update, the Vector log collector configuration avoids collecting any rotated infrastructure log files. ( LOG-5156 ) Before this update, the Logging Operator did not monitor changes to the grafana-dashboard-cluster-logging config map. With this update, the Logging Operator monitors changes in the ConfigMap objects, ensuring the system stays synchronized and responds effectively to config map modifications. ( LOG-5308 ) Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5426 ) Before this change, the Fluentd out_http plugin ignored the no_proxy environment variable. With this update, the Fluentd patches the HTTP#start method of ruby to honor the no_proxy environment variable. ( LOG-5466 ) 1.1.11.2. CVEs CVE-2022-48554 CVE-2023-2975 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-6129 CVE-2023-6237 CVE-2023-7008 CVE-2023-45288 CVE-2024-0727 CVE-2024-22365 CVE-2024-25062 CVE-2024-28834 CVE-2024-28835 1.1.12. Logging 5.9.1 This release includes OpenShift Logging Bug Fix Release 5.9.1 1.1.12.1. Enhancements Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5401 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5395 ) 1.1.12.2. Bug Fixes Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. ( LOG-5268 ) Before this update, a prune filter without a defined pruneFilterSpec would cause a segfault. With this update, there is a validation error if a prune filter is without a defined puneFilterSpec . ( LOG-5322 ) Before this update, a drop filter without a defined dropTestsSpec would cause a segfault. With this update, there is a validation error if a prune filter is without a defined puneFilterSpec . ( LOG-5323 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5397 ) Before this update, poorly formatted timestamp fields in audit log records led to WARN messages in Red Hat OpenShift Logging Operator logs. With this update, a remap transformation ensures that the timestamp field is properly formatted. ( LOG-4672 ) Before this update, the error message thrown while validating a ClusterLogForwarder resource name and namespace did not correspond to the correct error. With this update, the system checks if a ClusterLogForwarder resource with the same name exists in the same namespace. If not, it corresponds to the correct error. ( LOG-5062 ) Before this update, the validation feature for output config required a TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs are improved, and the error message are more informative. ( LOG-5307 ) Before this update, defining an infrastructure input type did not exclude logging workloads from the collection. With this update, the collection excludes logging services to avoid feedback loops. ( LOG-5309 ) 1.1.12.3. CVEs No CVEs. 1.1.13. Logging 5.9.0 This release includes OpenShift Logging Bug Fix Release 5.9.0 1.1.13.1. Removal notice The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. Instances of OpenShift Elasticsearch Operator from prior logging releases, remain supported until the EOL of the logging release. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 1.1.13.2. Deprecation notice In Logging 5.9, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. In Logging 5.9, the Fields option for the Splunk output type was never implemented and is now deprecated. It will be removed in a future release. 1.1.13.3. Enhancements 1.1.13.3.1. Log Collection This enhancement adds the ability to refine the process of log collection by using a workload's metadata to drop or prune logs based on their content. Additionally, it allows the collection of infrastructure logs, such as journal or container logs, and audit logs, such as kube api or ovn logs, to only collect individual sources. ( LOG-2155 ) This enhancement introduces a new type of remote log receiver, the syslog receiver. You can configure it to expose a port over a network, allowing external systems to send syslog logs using compatible tools such as rsyslog. ( LOG-3527 ) With this update, the ClusterLogForwarder API now supports log forwarding to Azure Monitor Logs, giving users better monitoring abilities. This feature helps users to maintain optimal system performance and streamline the log analysis processes in Azure Monitor, which speeds up issue resolution and improves operational efficiency. ( LOG-4605 ) This enhancement improves collector resource utilization by deploying collectors as a deployment with two replicas. This occurs when the only input source defined in the ClusterLogForwarder custom resource (CR) is a receiver input instead of using a daemon set on all nodes. Additionally, collectors deployed in this manner do not mount the host file system. To use this enhancement, you need to annotate the ClusterLogForwarder CR with the logging.openshift.io/dev-preview-enable-collector-as-deployment annotation. ( LOG-4779 ) This enhancement introduces the capability for custom tenant configuration across all supported outputs, facilitating the organization of log records in a logical manner. However, it does not permit custom tenant configuration for logging managed storage. ( LOG-4843 ) With this update, the ClusterLogForwarder CR that specifies an application input with one or more infrastructure namespaces like default , openshift* , or kube* , now requires a service account with the collect-infrastructure-logs role. ( LOG-4943 ) This enhancement introduces the capability for tuning some output settings, such as compression, retry duration, and maximum payloads, to match the characteristics of the receiver. Additionally, this feature includes a delivery mode to allow administrators to choose between throughput and log durability. For example, the AtLeastOnce option configures minimal disk buffering of collected logs so that the collector can deliver those logs after a restart. ( LOG-5026 ) This enhancement adds three new Prometheus alerts, warning users about the deprecation of Elasticsearch, Fluentd, and Kibana. ( LOG-5055 ) 1.1.13.3.2. Log Storage This enhancement in LokiStack improves support for OTEL by using the new V13 object storage format and enabling automatic stream sharding by default. This also prepares the collector for future enhancements and configurations. ( LOG-4538 ) This enhancement introduces support for short-lived token workload identity federation with Azure and AWS log stores for STS enabled OpenShift Container Platform 4.14 and later clusters. Local storage requires the addition of a CredentialMode: static annotation under spec.storage.secret in the LokiStack CR. ( LOG-4540 ) With this update, the validation of the Azure storage secret is now extended to give early warning for certain error conditions. ( LOG-4571 ) With this update, Loki now adds upstream and downstream support for GCP workload identity federation mechanism. This allows authenticated and authorized access to the corresponding object storage services. ( LOG-4754 ) 1.1.13.4. Bug Fixes Before this update, the logging must-gather could not collect any logs on a FIPS-enabled cluster. With this update, a new oc client is available in cluster-logging-rhel9-operator , and must-gather works properly on FIPS clusters. ( LOG-4403 ) Before this update, the LokiStack ruler pods could not format the IPv6 pod IP in HTTP URLs used for cross-pod communication. This issue caused querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the problem. Now, querying rules and alerts through the Prometheus-compatible API works just like in IPv4 environments. ( LOG-4709 ) Before this fix, the YAML content from the logging must-gather was exported in a single line, making it unreadable. With this update, the YAML white spaces are preserved, ensuring that the file is properly formatted. ( LOG-4792 ) Before this update, when the ClusterLogForwarder CR was enabled, the Red Hat OpenShift Logging Operator could run into a nil pointer exception when ClusterLogging.Spec.Collection was nil. With this update, the issue is now resolved in the Red Hat OpenShift Logging Operator. ( LOG-5006 ) Before this update, in specific corner cases, replacing the ClusterLogForwarder CR status field caused the resourceVersion to constantly update due to changing timestamps in Status conditions. This condition led to an infinite reconciliation loop. With this update, all status conditions synchronize, so that timestamps remain unchanged if conditions stay the same. ( LOG-5007 ) Before this update, there was an internal buffering behavior to drop_newest to address high memory consumption by the collector resulting in significant log loss. With this update, the behavior reverts to using the collector defaults. ( LOG-5123 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5165 ) Before this update, the configuration of the Loki Operator ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5212 ) 1.1.13.5. Known Issues None. 1.1.13.6. CVEs CVE-2023-5363 CVE-2023-5981 CVE-2023-46218 CVE-2024-0553 CVE-2023-0567 | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/release-notes |
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation | Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation To install Red Hat Virtualization Manager on a system that does not have a direct connection to the Content Delivery Network, download the required packages on a system that has Internet access, then create a repository that can be shared with the offline Manager machine. The system hosting the repository must be connected to the same network as the client systems where the packages are to be installed. Prerequisites A Red Hat Enterprise Linux 7 Server installed on a system that has access to the Content Delivery Network. This system downloads all the required packages, and distributes them to your offline system(s). A large amount of free disk space available. This procedure downloads a large number of packages, and requires up to 50GB of free disk space. Enable the Red Hat Virtualization Manager repositories on the online system: Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Configuring the Offline Repository Servers that are not connected to the Internet can access software repositories on other systems using File Transfer Protocol (FTP). To create the FTP repository, install and configure vsftpd : Install the vsftpd package: Start the vsftpd service, and ensure the service starts on boot: Create a sub-directory inside the /var/ftp/pub/ directory. This is where the downloaded packages will be made available: Download packages from all configured software repositories to the rhvrepo directory. This includes repositories for all Content Delivery Network subscription pools attached to the system, and any locally configured repositories: This command downloads a large number of packages, and takes a long time to complete. The -l option enables yum plug-in support. Install the createrepo package: Create repository metadata for each of the sub-directories where packages were downloaded under /var/ftp/pub/rhvrepo : Create a repository file, and copy it to the /etc/yum.repos.d/ directory on the offline machine on which you will install the Manager. The configuration file can be created manually or with a script. Run the script below on the system hosting the repository, replacing ADDRESS in the baseurl with the IP address or FQDN of the system hosting the repository: #!/bin/sh REPOFILE="/etc/yum.repos.d/rhev.repo" echo -e " " > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e "[USD(basename USDDIR)]" >> USDREPOFILE echo -e "name=USD(basename USDDIR)" >> USDREPOFILE echo -e "baseurl=ftp://_ADDRESS_/pub/rhvrepo/`basename USDDIR`" >> USDREPOFILE echo -e "enabled=1" >> USDREPOFILE echo -e "gpgcheck=0" >> USDREPOFILE echo -e "\n" >> USDREPOFILE done Return to Section 3.3, "Installing and Configuring the Red Hat Virtualization Manager" . Packages are installed from the local repository, instead of from the Content Delivery Network. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum install vsftpd",
"systemctl start vsftpd.service systemctl enable vsftpd.service",
"mkdir /var/ftp/pub/rhvrepo",
"reposync -l -p /var/ftp/pub/rhvrepo",
"yum install createrepo",
"for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do createrepo USDDIR; done",
"#!/bin/sh REPOFILE=\"/etc/yum.repos.d/rhev.repo\" echo -e \" \" > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e \"[USD(basename USDDIR)]\" >> USDREPOFILE echo -e \"name=USD(basename USDDIR)\" >> USDREPOFILE echo -e \"baseurl=ftp://_ADDRESS_/pub/rhvrepo/`basename USDDIR`\" >> USDREPOFILE echo -e \"enabled=1\" >> USDREPOFILE echo -e \"gpgcheck=0\" >> USDREPOFILE echo -e \"\\n\" >> USDREPOFILE done"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Configuring_an_Offline_Repository_for_Red_Hat_Virtualization_Manager_Installation_SM_localDB_deploy |
12.9. Creating Custom Schema Files | 12.9. Creating Custom Schema Files Schema files are simple LDIF files which define the cn=schema entry. Each attribute and object class is added as an attribute to that entry. Here are the requirements for creating a schema file: The first line must be dn: cn=schema . The schema file can include both attributes and object classes, but it can also include only one or the other. If both attributes and object classes are defined in the style, all of the attributes must be listed in the file first, then the object classes. The object classes can use attributes defined in other schema files. The file must be named in the format [1-9][0-9]text.ldif . The file must always begin with two numbers. Numerically, the schema file cannot be loaded before the core configuration schema (which begin with 00 and 01 ). Also, the Directory Server always writes its custom schema to the numerically and alphabetically highest named schema file in the schema directory. It expects this file to be 99user.ldif . If this file is not 99user.ldif , the server can experience problems. So, always make sure custom schema files are at least alphabetically lower than 99user.ldif . The name 99alpha.ldif is okay; the name 99zzz.ldif is not. Practices for creating schema files are described in more detail in the Deployment Guide . Attributes are defined in the schema file as attributetypes attributes to the schema, with five components: An OID, usually a dot-separated number A unique name, in the form NAME name A description, in the form DESC description The OID for the syntax of the attribute values, discussed in Section 12.1.3.1, "Directory Server Attribute Syntaxes" , in the form SYNTAX OID Optionally, the source where the attribute is defined For example: Likewise, object classes are defined as values of objectclasses attributes, although there is slightly more flexibility in how the object class is defined. The only required configurations are the name and OID for the object class; all other configuration depends on the needs for the object class: An OID, usually a dot-separated number A unique name, in the form NAME name A description, in the form DESC description The superior, or parent, object class for this object class, in the form SUP object_class ; if there is no related parent, use SUP top The word AUXILIARY , which gives the type of entry to which the object class applies; AUXILIARY means it can apply to any entry A list of required attributes, preceded by the word MUST ; to include multiple attributes, enclose the group in parentheses and separate with attributes with dollar signs (USD) A list of allowed attributes, preceded by the word MAY ; to include multiple attributes, enclose the group in parentheses and separate with attributes with dollar signs (USD) For example: Example 12.4, "Example Schema File" shows a simplified schema file. Example 12.4. Example Schema File Custom schema files should be added to the Directory Server instance's schema directory, /etc/dirsrv/slapd- instance /schema . The schema in these files are not loaded and available to the server unless the server is restarted or a dynamic reload task is run. Important If you want to use a standard schema from the /usr/share/data/ directory, copy the schema file to the /usr/share/dirsrv/schema/ directory. If you require that a standard schema is only available to a specific instance, copy the schema file to the /etc/dirsrv/slapd- instance_name /schema/ directory, but use a different file name in the destination directory. Otherwise, Directory Server renames the file during an upgrade and appends the .bak suffix. | [
"attributetypes: ( 1.2.3.4.5.6.1 NAME 'dateofbirth' DESC 'For employee birthdays' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUED X-ORIGIN 'Example defined')",
"objectclasses: ( 2.16.840.1133730.2.123 NAME 'examplePerson' DESC 'Example Person Object Class' SUP inetOrgPerson AUXILIARY MUST cn MAY (exampleDateOfBirth USD examplePreferredOS) )",
"dn: cn=schema attributetypes: ( 2.16.840.1133730.1.123 NAME 'dateofbirth' DESC 'For employee birthdays' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Example defined') objectclasses: ( 2.16.840.1133730.2.123 NAME 'examplePerson' DESC 'Example Person Object Class' SUP inetOrgPerson AUXILIARY MAY (dateofbirth) )"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/custom-schema-files |
Chapter 6. EgressFirewall [k8s.ovn.org/v1] | Chapter 6. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressFirewall. status object Observed status of EgressFirewall 6.1.1. .spec Description Specification of the desired behavior of EgressFirewall. Type object Required egress Property Type Description egress array a collection of egress firewall rule objects egress[] object EgressFirewallRule is a single egressfirewall rule object 6.1.2. .spec.egress Description a collection of egress firewall rule objects Type array 6.1.3. .spec.egress[] Description EgressFirewallRule is a single egressfirewall rule object Type object Required to type Property Type Description ports array ports specify what ports and protocols the rule applies to ports[] object EgressFirewallPort specifies the port to allow or deny traffic to to object to is the target that traffic is allowed/denied to type string type marks this as an "Allow" or "Deny" rule 6.1.4. .spec.egress[].ports Description ports specify what ports and protocols the rule applies to Type array 6.1.5. .spec.egress[].ports[] Description EgressFirewallPort specifies the port to allow or deny traffic to Type object Required port protocol Property Type Description port integer port that the traffic must match protocol string protocol (tcp, udp, sctp) that the traffic must match. 6.1.6. .spec.egress[].to Description to is the target that traffic is allowed/denied to Type object Property Type Description cidrSelector string cidrSelector is the CIDR range to allow/deny traffic to. If this is set, dnsName and nodeSelector must be unset. dnsName string dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. For a wildcard DNS name, the ' ' will match only one label. Additionally, only a single ' ' can be used at the beginning of the wildcard DNS name. For example, '*.example.com' will match 'sub1.example.com' but won't match 'sub2.sub1.example.com' nodeSelector object nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. 6.1.7. .spec.egress[].to.nodeSelector Description nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.8. .spec.egress[].to.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.9. .spec.egress[].to.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.10. .status Description Observed status of EgressFirewall Type object Property Type Description messages array (string) status string 6.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressfirewalls GET : list objects of kind EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls DELETE : delete collection of EgressFirewall GET : list objects of kind EgressFirewall POST : create an EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} DELETE : delete an EgressFirewall GET : read the specified EgressFirewall PATCH : partially update the specified EgressFirewall PUT : replace the specified EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name}/status GET : read status of the specified EgressFirewall PATCH : partially update status of the specified EgressFirewall PUT : replace status of the specified EgressFirewall 6.2.1. /apis/k8s.ovn.org/v1/egressfirewalls HTTP method GET Description list objects of kind EgressFirewall Table 6.1. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty 6.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls HTTP method DELETE Description delete collection of EgressFirewall Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressFirewall Table 6.3. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressFirewall Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body EgressFirewall schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 202 - Accepted EgressFirewall schema 401 - Unauthorized Empty 6.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the EgressFirewall HTTP method DELETE Description delete an EgressFirewall Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressFirewall Table 6.10. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressFirewall Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressFirewall Table 6.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body EgressFirewall schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty 6.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name}/status Table 6.16. Global path parameters Parameter Type Description name string name of the EgressFirewall HTTP method GET Description read status of the specified EgressFirewall Table 6.17. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressFirewall Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.19. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressFirewall Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.21. Body parameters Parameter Type Description body EgressFirewall schema Table 6.22. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/egressfirewall-k8s-ovn-org-v1 |
Chapter 10. RHEL Installations on IBM Power Servers | Chapter 10. RHEL Installations on IBM Power Servers You can install Red Hat Enterprise Linux on various IBM Power System servers. 10.1. Supported IBM Power Servers You can install Red Hat Enterprise Linux on IBM Power Systems. You can find the complete list of supported IBM Power servers on the Red Hat Ecosystem Catalog . 10.2. Overview of the installation process on PowerVM LPAR by using the HMC You can install RHEL on the PowerVM logical partition (LPAR) by using the Hardware Management Console. A Hardware Management Console (HMC) is a hardware appliance that you can use to administer IBM Power Systems servers. The installation workflow involves the following general steps: Download the RHEL installation ISO. Prepare a bootable physical installation medium based on your installation method. Verify that the Power System is added to the HMC. For more information, see add or remove connections to HMC in the IBM documentation. Configure VIOS and LPAR on the managed system or configure full system LPAR based on the requirements. Log in to the HMC console. Install Red Hat Enterprise Linux. For detailed instructions, see Installing Linux on PowerVM LPAR by using the HMC in the IBM documentation. 10.3. Overview of the installation process on IBM Power Servers with the graphics card You can install RHEL on IBM Power Systems servers with the graphics card. The installation workflow involves the following general steps: Download the RHEL installation ISO. Prepare a bootable physical installation medium based on your installation method. Prepare the machine for RHEL installation. Boot the installer kernel. Install Red Hat Enterprise Linux. Optional: Install IBM Tools Repository to use Service and Productivity tools, IBM Advance Toolchain for Linux on Power, and IBM SDK for PowerLinux. For detailed instructions, see Installing Linux on Power Systems servers with a graphics card in the IBM documentation. Additional resources For instructions to install hardware in a rack, see IBM Knowledge Center and search for your power hardware. 10.4. Overview of the installation process on IBM Power Servers by using the serial console You can install RHEL on IBM Power Systems servers by using the serial console. The installation workflow involves the following general steps: Download the RHEL installation ISO. Prepare a bootable physical installation medium based on your installation method. Prepare your machine for the RHEL installation. Boot the installer kernel. Start a VNC session. For more information, see Preparing a remote installation by using VNC . Install Red Hat Enterprise Linux. Optional: Install IBM Tools Repository to use additional software. For more information, see IBM Linux on Power tools repository . For detailed instructions, see Installing Linux on Power Systems servers by using the serial console in the IBM documentation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/rhel-installations-on-ibm-power-servers_rhel-installer |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open a the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/proc_providing-feedback-on-red-hat-documentation |
Getting started with Red Hat build of OpenJDK 21 | Getting started with Red Hat build of OpenJDK 21 Red Hat build of OpenJDK 21 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/getting_started_with_red_hat_build_of_openjdk_21/index |
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2021-04-30 09:47:31 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/using_your_subscription |
probe::sunrpc.clnt.shutdown_client | probe::sunrpc.clnt.shutdown_client Name probe::sunrpc.clnt.shutdown_client - Shutdown an RPC client Synopsis sunrpc.clnt.shutdown_client Values om_queue the jiffies queued for xmit clones the number of clones vers the RPC program version number om_rtt the RPC RTT jiffies om_execute the RPC execution jiffies rpccnt the count of RPC calls progname the RPC program name authflavor the authentication flavor prot the IP protocol number prog the RPC program number om_bytes_recv the count of bytes in om_bytes_sent the count of bytes out port the port number om_ntrans the count of RPC transmissions netreconn the count of reconnections om_ops the count of operations tasks the number of references servername the server machine name | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-shutdown-client |
Chapter 26. Compiler and Tools | Chapter 26. Compiler and Tools Package selection now works in system-config-kickstart A bug in the system-config-kickstart graphical Kickstart file creation utility caused the package selection to be unavailable because the tool could not download package information from repositories. This bug is now fixed, and you can now configure package selection in system-config-kickstart again. (BZ# 1272068 ) NVMe devices no longer show up as Unknown in parted and Anaconda Previously, any Non-Volatile Memory Express (NVMe) devices were not being recognized by the Anaconda installer and the parted storage configuration tool during the installation, and were instead being labeled as Model: Unknown (unknown) . This update backports an upstream patch that enables recognition of these devices, and they are now being correctly identified as NVMe Device (nvme) during installation. (BZ#1316239) DBD::MySQL now sends and receives smaller integers correctly on big-endian platforms Previously, the DBD::MySQL Perl driver incorrectly handled integers smaller than 64 bits on big endian platforms. Consequently, tests for prepared statements failed for certain variable sizes on the IBM Z architecture. This bug has been fixed, and the described problem no longer occurs. (BZ#1311646) The version Perl module now supports tainted input and tainted version objects Previously, the version module of Perl was unable to correctly parse tainted input. Consequently, when building a version object from a tainted variable, the version->new() method reported the Invalid version format (non-numeric data) error. This update adds support for parsing tainted input and for printing tainted version objects and strings. (BZ# 1378885 ) The HTTP::Daemon Perl module now supports IPv6 Previously, the HTTP::Daemon Perl module did not support IPv6 addresses. Consequently, when running an HTTP::Daemon::SSL server on an IPv6 address, the server terminated unexpectedly on an attempt to print the IPv6 address with an Arg length for inet_ntoa error message. With this update, the HTTP::Daemon module has been ported from the IO::Socket::INET to the IO::Socket::IP module. As a result, HTTP::Daemon handles IPv6 addresses as expected. (BZ# 1413065 ) GDB shows inline function names in breakpoint listing Previously, the GDB debugger showed caller function names instead of inlined callee function names when listing breakpoints. As a consequence, GDB users were not able to identify breakpoints placed on inline functions from the function name. GDB has been extended to store names of inline callee functions when breakpoints are placed. As a result, GDB now correctly displays names of inline functions when listing breakpoints. (BZ#1228556) Relocation failures at module load time due to wrong GCC alignment fixed Previously, GCC generated code containing .toc sections with 2^0 alignment. As a consequence, relocation failures could occur at module load time. GCC has been changed to generate .toc sections aligned to 2^3. This fix eliminates most cases of occurrence of this bug. (BZ#1487434) The istream::sentry object from the gcc C++ standard library no longer throws exceptions Previously, the istream::sentry object from the gcc C++ standard library did not properly handle exceptions that happen while skipping whitespace. As a consequence, an unexpected exception could occur in the object's code. The constructor for the sentry class has been fixed to catch the exceptions and update the error state of the istream object appropriately. (BZ#1469384) Multiple fixes in gdb on IBM Power Previously, various features of the gdb debugger have been broken on the IBM Power architecture: Record and replay functionality was not available and resulted in error messages or not restoring the register values. Printing short vector return values resulted in wrong values displayed. Single stepping over atomic sequences failed to actually step over them - the program counter did not change. This update fixes these features. (BZ# 1480498 , BZ#1480496, BZ#1480497) GDB no longer crashes when dumping core from a process that terminates Previously, the GDB debugger did not consider that a process can be terminated while GDB is dumping it into a core file. As a consequence, when a dumped program terminated after receiving an unexpected SIGKILL signal, the gcore utility terminated unexpectedly as well. With this update, GDB has been extended to handle this situation. As a result, GDB and the gcore command no longer terminate unexpectedly and create invalid core files. (BZ#1493675) GDB can again dump memory protected by the VM_DONTDUMP flag changes to the GNU Debugger GDB made the behavior of the gcore command more similar to the behavior of the Linux kernel when dumping process memory to increase data security. Consequently, users of GDB could not dump memory protected by the VM_DONTDUMP flag. The new set dump-excluded-mappings setting has been added to GDB to enable dumping of memory with this flag. As a result, users can dump the whole process memory with GDB again. (BZ# 1518243 ) Programs using the CLONE_PTRACE flag on threads now run under strace Previously, programs which set the CLONE_PTRACE flag on new threads caused undefined behavior of the strace tool, because it uses the ptrace() function for its operation. As a consequence, such programs could be neither traced nor executed properly. The strace tool has been modified to ignore threads with an unexpected CLONE_PTRACE flag. As a result, programs which use CLONE_PTRACE execute properly under strace . (BZ#1466535) exiv2 rebased to version 0.26 The exiv2 packages have been upgraded to upstream version 0.26, which provides a number of bug fixes and enhancements over the version. Notably, exiv2 now contains: CMake support for Visual Studio Recursive File Dump ICC Profile Support The exiv2 command for metadata piping Lens File for user lens definitions User defined lens types WebP Support For the complete changelog, see http://www.exiv2.org/changelog.html#v0.26 . (BZ# 1420227 ) gssproxy fixed to properly update ccaches Previously, the gssproxy package did not correctly handle the key version number (kvno) incrementation in Kerberos credential caches (ccaches). As a consequence, stale ccaches were not properly overwritten. This update fixes these problems in gssproxy ccache caching. As a result, ccaches are now properly updated, and the caching prevents excessive requests for updates. (BZ# 1488629 ) gcc on the little-endian variant of IBM Power Systems architecture no longer creates unused stack frames Previously, using the -pg -mprofile=kernel options of the gcc compiler on the little-endian variant of IBM Power Systems architecture could result in unused stack frames being generated for leaf functions. The gcc compiler has been fixed and the unused stack frames no longer occur in this situation. (BZ#1468546) Several bugs fixed in gssproxy This update fixes several bugs in the gssproxy package. The bug fixes include preventing potential memory leaks and concurrency problems. (BZ#1462974) The BFD library regains the ability to convert binary addresses to source code positions A enhancement to the BFD library from the binutils package caused a bug in parsing the DWARF debug information. As a consequence, BFD and all tools using it, such as gprof and perf , were unable to convert binary file addresses to positions in source code. With this update, BFD has been modified to prevent the described problem. As a result, BFD can now convert addresses in binary files into positions in source code as expected. Note that tools that use the BFD library must be relinked in order to take advantage of this fix. (BZ#1465318) Applications using vector registers for passing arguments work again Previously, the dynamic loader in the GNU C library ( glibc ) contained an optimization which avoided saving and restoring vector registers for 64-bit Intel and AMD architectures. Consequently, applications compiled for these architectures and using unsupported vector registers for passing function arguments, not adhering to the published x86-64 psABI specification, could fail and produce unexpected results. This update changes the dynamic loader to use the XSAVE and XSAVEC context switch CPU instructions, preserving more CPU state, including all vector registers. As a result, applications using vector registers for argument passing, in ways which are not supported by the x86-64 psABI specification, work again. (BZ#1504969) curl now properly resets the HTTP authentication state Prior to this update, the authentication state was not reset properly when an HTTP transfer finished or when the 'curl_easy_reset()' function was called. Consequently, the curl tool did not send the request body to the following URL. With this update, the authentication state is reset properly when an HTTP transfer is done or when curl_easy_reset() is called, and the described problem no longer occurs. (BZ#1511523) The strip utility works again Previously, the BFD library missed a NULL pointer check on the IBM Z architecture. As a consequence, running the strip utility caused a segmentation fault. This bug has been fixed, and strip now works as expected. (BZ# 1488889 ) Importing python modules generated by f2py now works properly Previously, when dynamic linking loader was configured to load symbols globally, a segmentation fault occurred when importing any python module generated by the f2py utility. This update renames the PyArray_API symbol to _npy_f2py_ARRAY_API , which prevents potential conflicts with the same symbol in the multiarray module. As a result, importing modules generated by f2py no longer leads to a segmentation fault. (BZ# 1167156 ) mailx is not encoding multi-byte subjects properly Previously, the mailx mail user agent did not split non-ASCII message headers on multi-byte character boundaries when encoding into the Multipurpose Internet Mail Extension (MIME) standard. As a consequence, the headers were incorrectly decoded. This update modifies the MIME encoding function so that it splits headers into encoded words on multi-byte character boundaries. As a result, mailx now sends messages with headers that can be properly decoded. (BZ#1474130) The --all-logs option now works as expected in sosreport Previously, the --all-logs option was ignored by the apache , nscd , and logs plug-ins of the sosreport utility. This bug has been fixed, and the mentioned plug-ins now correctly handle --all-logs . Note that when using --all-logs , it is impossible to limit the size of the log with the --log-size option, which is an expected behavior. (BZ#1183243) Python scripts can now correctly connect to HTTPS servers through a proxy, while explicitly setting the port The Python standard library provided in Red Hat Enterprise Linux was previously updated to enable certificate verification by default. However, a bug prevented Python scripts using the standard library from connecting to HTTPS servers using a proxy when explicitly setting the port to connect to. The same bug also prevented users from using the bootstrap script for registration with Red Hat Satellite 6 through a proxy. This bug is now fixed, and scripts can now connect to HTTPS servers and register using Red Hat Satellite as expected. (BZ# 1483438 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_compiler_and_tools |
Chapter 1. Installing the client package | Chapter 1. Installing the client package Install the Insights client on each system. Procedure Enter the following command to install the current version of the Insights client: 1.1. Configuring authentication Once you have installed the client package, you need to configure authentication. Use one of two methods: Activation keys (recommended) Registering the Insights client with Red Hat Subscription Manager (RHSM) For more information about authentication, refer to Client Configuration Guide for Red Hat Insights . | [
"yum install insights-client"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/deploying_red_hat_insights_on_existing_rhel_systems_managed_by_red_hat_cloud_access/installing-insights-client_deploying-insights-with-rhca |
Chapter 17. Groovy | Chapter 17. Groovy Overview Groovy is a Java-based scripting language that allows quick parsing of object. The Groovy support is part of the camel-groovy module. Adding the script module To use Groovy in your routes you need to add a dependencies on camel-groovy to your project as shown in Example 17.1, "Adding the camel-groovy dependency" . Example 17.1. Adding the camel-groovy dependency Static import To use the groovy() static method in your application code, include the following import statement in your Java source files: Built-in attributes Table 17.1, "Groovy attributes" lists the built-in attributes that are accessible when using Groovy. Table 17.1. Groovy attributes Attribute Type Value context org.apache.camel.CamelContext The Camel Context exchange org.apache.camel.Exchange The current Exchange request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties org.apache.camel.builder.script.PropertiesFunction Function with a resolve method to make it easier to use the properties component inside scripts. The attributes all set at ENGINE_SCOPE . Example Example 17.2, "Routes using Groovy" shows two routes that use Groovy scripts. Example 17.2. Routes using Groovy Using the properties component To access a property value from the properties component, invoke the resolve method on the built-in properties attribute, as follows: Where PropKey is the key of the property you want to resolve, where the key value is of String type. For more details about the properties component, see Properties in the Apache Camel Component Reference Guide . Customizing Groovy Shell Sometimes, you might need to use the custom GroovyShell instance, in your Groovy expressions. To provide custom GroovyShell , add an implementation of the org.apache.camel.language.groovy.GroovyShellFactory SPI interface to your Camel registry. For example, when you add the following bean to your Spring context, Apache Camel will use the custom GroovyShell instance that includes the custom static imports, instead of the default one. | [
"<!-- Maven POM File --> <properties> <camel-version>2.23.2.fuse-7_13_0-00013-redhat-00001</camel-version> </properties> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-groovy</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>",
"import static org.apache.camel.builder.script.ScriptBuilder.*;",
"<camelContext> <route> <from uri=\"direct:items\" /> <filter> <language language=\"groovy\">request.lineItems.any { i -> i.value > 100 }</language> <to uri=\"mock:mock1\" /> </filter> </route> <route> <from uri=\"direct:in\"/> <setHeader headerName=\"firstName\"> <language language=\"groovy\">USDuser.firstName USDuser.lastName</language> </setHeader> <to uri=\"seda:users\"/> </route> </camelContext>",
".setHeader(\"myHeader\").groovy(\"properties.resolve( PropKey )\")",
"public class CustomGroovyShellFactory implements GroovyShellFactory { public GroovyShell createGroovyShell(Exchange exchange) { ImportCustomizer importCustomizer = new ImportCustomizer(); importCustomizer.addStaticStars(\"com.example.Utils\"); CompilerConfiguration configuration = new CompilerConfiguration(); configuration.addCompilationCustomizers(importCustomizer); return new GroovyShell(configuration); } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Groovy |
Developing and Managing Integrations Using Camel K | Developing and Managing Integrations Using Camel K Red Hat build of Apache Camel K 1.10.5 A developer's guide to Camel K Red Hat build of Apache Camel K Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/developing_and_managing_integrations_using_camel_k/index |
Chapter 5. Working with Helm charts | Chapter 5. Working with Helm charts 5.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Dedicated clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Dedicated resources. Creating a chart in a cluster creates a running instance of the chart known as a release . Each time a chart is created, or a release is upgraded or rolled back, an incremental revision is created. 5.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Dedicated or Kubernetes resources. Package and share your applications as charts. 5.1.2. Red Hat Certification of Helm charts for OpenShift You can choose to verify and certify your Helm charts by Red Hat for all the components you will be deploying on the Red Hat OpenShift Dedicated. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters. 5.1.3. Additional resources For more information on how to certify your Helm charts as a Red Hat partner, see Red Hat Certification of Helm charts for OpenShift . For more information on OpenShift and Container certification guides for Red Hat partners, see Partner Guide for OpenShift and Container Certification . For a list of the charts, see the Red Hat Helm index file . You can view the available charts at the Red Hat Marketplace . For more information, see Using the Red Hat Marketplace . 5.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Dedicated web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 5.2.1. On Linux Download the Linux x86_64 or Linux amd64 Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 5.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 5.2.3. On Windows 10 Download the latest .exe file and put in a directory of your preference. Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 5.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 5.3. Configuring custom Helm chart repositories The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file . As a cluster administrator, you can add multiple cluster-scoped and namespace-scoped Helm chart repositories, separate from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . As a regular user or project member with the appropriate role-based access control (RBAC) permissions, you can add multiple namespace-scoped Helm chart repositories, apart from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . In the Developer perspective of the web console, you can use the Helm page to: Create Helm Releases and Repositories using the Create button. Create, update, or delete a cluster-scoped or namespace-scoped Helm chart repository. View the list of the existing Helm chart repositories in the Repositories tab, which can also be easily distinguished as either cluster scoped or namespace scoped. 5.3.1. Creating Helm releases using the Developer perspective You can use either the Developer perspective in the web console or the CLI to select and create a release from the Helm charts listed in the Developer Catalog . You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console. Prerequisites You have logged in to the web console and have switched to the Developer perspective. Procedure To create Helm releases from the Helm charts provided in the Developer Catalog : In the Developer perspective, navigate to the +Add view and select a project. Then click Helm Chart option to see all the Helm Charts in the Developer Catalog . Select a chart and read the description, README, and other details about the chart. Click Create . Figure 5.1. Helm charts in developer catalog In the Create Helm Release page: Enter a unique name for the release in the Release Name field. Select the required chart version from the Chart Version drop-down list. Configure your Helm chart by using the Form View or the YAML View . Note Where available, you can switch between the YAML View and Form View . The data is persisted when switching between the views. Click Create to create a Helm release. The web console displays the new release in the Topology view. If a Helm chart has release notes, the web console displays them. If a Helm chart creates workloads, the web console displays them on the Topology or Helm release details page. The workloads are DaemonSet , CronJob , Pod , Deployment , and DeploymentConfig . View the newly created Helm release in the Helm Releases page. You can upgrade, rollback, or delete a Helm release by using the Actions button on the side panel or by right-clicking a Helm release. 5.3.2. Using Helm in the web terminal You can use Helm by Accessing the web terminal in the Developer perspective of the web console. 5.3.3. Creating a custom Helm chart on OpenShift Dedicated Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Dedicated objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. 5 The Version of your chart as per the Semantic Versioning (SemVer) 2.0.0 Specification. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 5.3.4. Filtering Helm Charts by their certification level You can filter Helm charts based on their certification level in the Developer Catalog . Procedure In the Developer perspective, navigate to the +Add view and select a project. From the Developer Catalog tile, select the Helm Chart option to see all the Helm charts in the Developer Catalog . Use the filters to the left of the list of Helm charts to filter the required charts: Use the Chart Repositories filter to filter charts provided by Red Hat Certification Charts or OpenShift Helm Charts . Use the Source filter to filter charts sourced from Partners , Community , or Red Hat . Certified charts are indicated with the ( ) icon. Note The Source filter will not be visible when there is only one provider type. You can now select the required chart and install it. 5.4. Working with Helm releases You can use the Developer perspective in the web console to update, rollback, or delete a Helm release. 5.4.1. Prerequisites You have logged in to the web console and have switched to the Developer perspective. 5.4.2. Upgrading a Helm release You can upgrade a Helm release to upgrade to a new chart version or update your release configuration. Procedure In the Topology view, select the Helm release to see the side panel. Click Actions Upgrade Helm Release . In the Upgrade Helm Release page, select the Chart Version you want to upgrade to, and then click Upgrade to create another Helm release. The Helm Releases page displays the two revisions. 5.4.3. Rolling back a Helm release If a release fails, you can rollback the Helm release to a version. Procedure To rollback a release using the Helm view: In the Developer perspective, navigate to the Helm view to see the Helm Releases in the namespace. Click the Options menu adjoining the listed release, and select Rollback . In the Rollback Helm Release page, select the Revision you want to rollback to and click Rollback . In the Helm Releases page, click on the chart to see the details and resources for that release. Go to the Revision History tab to see all the revisions for the chart. Figure 5.2. Helm revision history If required, you can further use the Options menu adjoining a particular revision and select the revision to rollback to. 5.4.4. Deleting a Helm release Procedure In the Topology view, right-click the Helm release and select Delete Helm Release . In the confirmation prompt, enter the name of the chart and click Delete . | [
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/working-with-helm-charts |
Chapter 7. Message delivery | Chapter 7. Message delivery 7.1. Writing to a streamed large message To write to a large message, use the BytesMessage.writeBytes() method. The following example reads bytes from a file and writes them to a message: Example: Writing to a streamed large message BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); } 7.2. Reading from a streamed large message To read from a large message, use the BytesMessage.readBytes() method. The following example reads bytes from a message and writes them to a file: Example: Reading from a streamed large message BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); } 7.3. Using message groups Message groups are sets of messages that have the following characteristics: Messages in a message group share the same group ID. That is, they have same group identifier property. For JMS messages, the property is JMSXGroupID . Messages in a message group are always consumed by the same consumer, even if there are many consumers on a queue. Another consumer is chosen to receive a message group if the original consumer is closed. Message groups are useful when you want all messages for a certain value of the property to be processed serially by the same consumer. For example, you may want orders for any particular stock purchase to be processed serially by the same consumer. To do this, you could create a pool of consumers and then set the stock name as the value of the message property. This ensures that all messages for a particular stock are always processed by the same consumer. Setting the group ID The examples below show how to use message groups with AMQ Core Protocol JMS. Procedure If you are using JNDI to establish a JMS connection factory for your JMS client, add the groupID parameter and supply a value. All messages sent using this connection factory have the property JMSXGroupID set to the specified value. If you are not using JNDI, set the JMSXGroupID property using the setStringProperty() method. Message message = new TextMessage(); message.setStringProperty("JMSXGroupID", "MyGroup"); producer.send(message); Additional resources See message-group and message-group2 under <install-dir> /examples/features/standard for working examples of how message groups are configured and used. 7.4. Using duplicate message detection AMQ Broker includes automatic duplicate message detection, which filters out any duplicate messages it receives so you do not have to code your own duplicate detection logic. To enable duplicate message detection, provide a unique value for the message property _AMQ_DUPL_ID . When a broker receives a message, it checks if _AMQ_DUPL_ID has a value. If it does, the broker then checks in its memory cache to see if it has already received a message with that value. If a message with the same value is found, the incoming message is ignored. If you are sending messages in a transaction, you do not have to set _AMQ_DUPL_ID for every message in the transaction, but only in one of them. If the broker detects a duplicate message for any message in the transaction, it ignores the entire transaction. Setting the duplicate ID message property The following example shows how to set the duplicate detection property using AMQ Core Protocol JMS. Note that for convenience, the clients use the value of the constant org.apache.activemq.artemis.api.core.Message.HDR_DUPLICATE_DETECTION_ID for the name of the duplicate ID property, _AMQ_DUPL_ID . Procedure Set the value for _AMQ_DUPL_ID to a unique string value. Message jmsMessage = session.createMessage(); String myUniqueID = "This is my unique id"; message.setStringProperty(HDR_DUPLICATE_DETECTION_ID.toString(), myUniqueID); 7.5. Using message interceptors With AMQ Core Protocol JMS you can intercept packets entering or exiting the client, allowing you to audit packets or filter messages. Interceptors can change the packets that they intercept. This makes interceptors powerful, but also a feature that you should use with caution. Interceptors must implement the intercept() method, which returns a boolean value. If the returned value is true , the message packet continues onward. If the returned value is false , the process is aborted, no other interceptors are called, and the message packet is not processed further. Message interception occurs transparently to the main client code except when an outgoing packet is sent in blocking send mode. When an outgoing packet is sent with blocking enabled and that packet encounters an interceptor that returns false , an ActiveMQException is thrown to the caller. The thrown exception contains the name of the interceptor. Your interceptor must implement the org.apache.artemis.activemq.api.core.Interceptor interface. The client interceptor classes and their dependencies must be added to the Java classpath of the client to be properly instantiated and invoked. package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } | [
"BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); }",
"BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); }",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?groupID=MyGroup",
"Message message = new TextMessage(); message.setStringProperty(\"JMSXGroupID\", \"MyGroup\"); producer.send(message);",
"Message jmsMessage = session.createMessage(); String myUniqueID = \"This is my unique id\"; message.setStringProperty(HDR_DUPLICATE_DETECTION_ID.toString(), myUniqueID);",
"package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/message_delivery |
Deploying OpenShift Data Foundation on VMware vSphere | Deploying OpenShift Data Foundation on VMware vSphere Red Hat OpenShift Data Foundation 4.18 Instructions on deploying OpenShift Data Foundation using VMware vSphere infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on VMware vSphere clusters. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) vSphere clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow any one of the below deployment process for your environment: Internal mode Deploy using dynamic storage devices Deploy using local storage devices Deploy standalone Multicloud Object Gateway External mode Deploying OpenShift Data Foundation in external mode Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Verify the rotational flag on your VMDKs before deploying object storage devices (OSDs) on them. For more information, see the knowledgebase article Override device rotational flag in ODF environment . Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Token authentication method . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. Note Make sure that the devices have a unique by-id device name for each available raw block device. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by VMware vSphere (disk format: thin) provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Note Both internal and external OpenShift Data Foundation clusters are supported on VMware vSphere. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 2.3.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.3.1.2. Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . For VMs on VMware, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab . For more information, see Installing on vSphere . Optional: If you want to use thick-provisioned storage for flexibility, you must create a storage class with zeroedthick or eagerzeroedthick disk format. For information, see VMware vSphere object definition . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to thin-csi . If you have created a storage class with zeroedthick or eagerzeroedthick disk format for thick-provisioned storage, then that storage class is listed in addition to the default, thin-csi storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Spread the worker nodes across three different physical nodes, racks, or failure domains for high availability. Use vCenter anti-affinity to align OpenShift Data Foundation rack labels with physical nodes and racks in the data center to avoid scheduling two worker nodes on the same physical chassis. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of the aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select the Taint nodes checkbox to make selected nodes dedicated for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. Ensure that the disk type is SSD, which is the only supported disk type. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Installing on vSphere . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Select one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 4.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw Chapter 5. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using the local storage devices. 5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway After deploying the component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . 5.1.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.1.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) 5.2. Deploy standalone Multicloud Object Gateway using local storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway After deploying the MCG component, you can create and manage buckets using MCG object browser. For more information, see Creating and managing buckets using MCG object browser . 5.2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.2.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"oc annotate namespace openshift-storage openshift.io/node-selector="
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-local-storage-devices-vmware |
8.84. ksh | 8.84. ksh 8.84.1. RHBA-2013:1599 - ksh bug fix and enhancement update Updated ksh packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. KornShell (KSH) is a Unix shell developed by AT&T Bell Laboratories, which is backward-compatible with the Bourne shell (Bash) and includes many features of the C shell. The most recent version is KSH-93. KornShell complies with the POSIX.2 standard (IEEE Std 1003.2-1992). Note The ksh package has been upgraded to upstream version 20120801, which provides a number of bug fixes and enhancements over the version. (BZ# 840568 ) Bug Fixes BZ# 761551 Previously, the ksh shell did not set any editing mode as default, which caused various usability problems in interactive mode and with shell auto-completion. This update sets emacs editing mode as default for new users. As a result, the usability is significantly improved and the shell auto-completion works as expected. BZ# 858263 Previously, the ksh internal counter of jobs was too small. Consequently, when a script used a number of subshells in a loop, a counter overflow could occur causing the ksh shell to terminate unexpectedly with a segmentation fault. This update modifies ksh to use bigger types for counter variables. As a result, ksh no longer crashes in the described scenario. BZ# 903750 Previously, the ksh shell did not compute an offset for fixed size variables correctly. As a consequence, when assigning a right-justified variable with a fixed width to a smaller variable, the new variable could have an incorrect content. This update applies a patch to fix this bug and the assignment now proceeds as expected. BZ# 913110 Previously, the output of command substitutions was not always redirected properly. Consequently, the output in a here-document could be lost. This update fixes the redirection code for command substitutions and the here-document now contains the output as expected. BZ# 921455 , BZ# 982142 Using arrays inside of ksh functions, command aliases, or automatically loaded functions caused memory leaks to occur. The underlying source code has been modified to fix this bug and the memory leaks no longer occur in the described scenario. BZ# 922851 Previously, the ksh SIGTSTP signal handler could trigger another SIGTSTP signal. Consequently, ksh could enter an infinite loop. This updated version fixes the SIGTSTP signal processing and ksh now handles the signal without any problems. BZ# 924440 Previously, the ksh shell did not resize the file descriptor list every time it was necessary. This could lead to memory corruption when several file descriptors were used. As a consequence, ksh terminated unexpectedly. This updated version resizes the file descriptor list every time it is needed, and ksh no longer crashes in the described scenario. BZ# 960034 Previously, the ksh shell ignored the "-m" argument specified by the command line. As a consequence, ksh did not enable monitor mode and the user had to enable it in a script. With this update, ksh no longer ignores the argument so that the user is able to enable monitor mode from the command line as expected. BZ# 994251 The ksh shell did not handle I/O redirections from command substitutions inside a pipeline correctly. Consequently, the output of certain commands could be lost. With this update, the redirections have been fixed and data is no longer missing from the command outputs. Users of ksh are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/ksh |
Updating clusters | Updating clusters OpenShift Container Platform 4.10 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/index |
3.2.2. Remove Unused Devices | 3.2.2. Remove Unused Devices Removing unused or unnecessary devices can improve performance. For instance, a guest tasked as a web server is unlikely to require audio features or an attached tablet. Refer to the following example screen capture of the virt-manager tool. Click the Remove button to remove unnecessary devices: Figure 3.2. Remove unused devices | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sec-virt-manager-tuning-removing-unused-devices |
Appendix A. Certificate Profile Input and Output Reference | Appendix A. Certificate Profile Input and Output Reference Profile inputs and outputs define the expected input parameters in the certificate request and the output format of the enrollment result. Like many other components in Red Hat Certificate System, profile inputs and outputs are implemented as JAVA plug-ins to offer customization and flexibility. This appendix provides reference for the default input and output plug-ins. Section A.1, "Input Reference" Section A.2, "Output Reference" A.1. Input Reference An input puts certain fields on the enrollment page associated with a particular certificate profile. The inputs set for a certificate profile are used to generate the enrollment page dynamically with the appropriate fields; these input fields collect necessary information for the profile to generate the final certificate. A.1.1. Certificate Request Input The Certificate Request input is used for enrollments in which a certificate request is pasted into the enrollment form. It allows the request format to be set from a drop-down list and provides an input field to paste the request. This input puts the following fields in the enrollment form: Certificate Request Type . This drop-down menu lets the user specify the certificate request type. The choices are PKCS #10 or CRMF. Certificate Management Messages over Cryptographic Message Syntax (CMC) enrollment is supported with both PKCS #10 and CRMF. Certificate Request . This is the text area in which to paste the request. Example A.1. A.1.2. CMC Certificate Request Input The CMC Certificate Request input is used for enrollments using a Certificate Message over CMS (CMC) certificate request is submitted in the request form. The request type must be either PKCS #10 or CRMF, and the only field is the Certificate Request text area in which to paste the request. Example A.2. A.1.3. Dual Key Generation Input The Dual Key Generation input is for enrollments in which dual key pairs will be generated, and thus two certificates issued, one for signing and one for encryption. This input puts the following fields into the enrollment form: Key Generation Request Type . This field is a read-only field displaying crmf as the request type. Key Generation Request . This field sets the selection for the key size in the key generation request for both encryption and signing certificates. Example A.3. A.1.4. File-Signing Input The File-Signing input sets the fields to sign a file to show it has not been tampered with. This input creates the following fields: Key Generation Request Type . This field is a read-only field displaying crmf as the request type. Key Generation Request . This input adds a drop-down menu to select the key size to use in the key generation request. URL Of File Being Signed . This gives the location of the file which is to be signed. Text Being Signed . This gives the filename. Example A.4. A.1.5. Image Input The Image input sets the field to sign an image file. The only field which this input creates is Image URL , which gives the location of the image which is to be signed. A.1.6. Key Generation Input The Key Generation input is used for enrollments in which a single key pair will be generated, generally user-based certificate enrollments. This input puts the following fields into the enrollment form: Key Generation Request Type . This field is a read-only field displaying crmf as the request type. Key Generation Request . This input adds a drop-down menu to select the key size to use in the key generation request. Example A.5. A.1.7. nsHKeyCertRequest (Token Key) Input The Token Key input is used to enroll keys for hardware tokens for agents to use later for certificate-based authentication. This input puts the following fields into the enrollment form: Token Key CUID . This field gives the CUID (contextually unique user ID) for the token device. Token Key User Public Key . This field must contain the token user's public key. Example A.6. A.1.8. nsNKeyCertRequest (Token User Key) Input The Token User Key input is used to enroll keys for the user of a hardware token, for agents to use the token later for certificate-based authentication. This input puts the following fields into the enrollment form: Token Key User UID . This field gives the UID for the LDAP entry of the user of the token device. Token Key User Public Key . This field must contain the token user's public key. Example A.7. A.1.9. Serial Number Renewal Input The Serial Number Renewal Input is used to set the serial number of an existing certificate so that the CA can pull the original certificate entry and use the information to regenerate the certificate. The input inserts a Serial Number field into the enrollment form. This is the only input that needs to be used with a renewal form; all the other information is supplied by the certificate entry. Example A.8. A.1.10. Subject DN Input The Subject DN input allows the user to input the specific DN to set as the certificate subject name, and the input inserts a single Subject Name field into the enrollment form. Example A.9. A.1.11. Subject Name Input The Subject Name input is used for enrollment when DN parameters need to be collected from the user. The parameters are used to formulate the subject name in the certificate. This input puts the following fields into the enrollment form: UID (the LDAP directory user ID) Email Common Name (the name of the user) Organizational Unit (the organizational unit ( ou ) to which the user belongs) Organization (the organization name) Country (the country where the user is located) Example A.10. A.1.12. Submitter Information Input The Submitter Information input collects the certificate requester's information such as name, email, and phone. This input puts the following fields into the enrollment form: Requester Name Requester Email Requester Phone Example A.11. A.1.13. Generic Input The Generic Input allows admins to specify any number of input fields to be used with extension plug-ins that handle patterns. For example, the ccm and GUID parameters are used in the patterned Subject Alternative Name Extension Default plug-in: Example A.12. A.1.14. Subject Alternative Name Extension Input The Subject Alternative Name Extension Input is used along with the Subject Alternative Name Extension Default plug-in. It allows admins to enable the numbered parameters in URI with the pattern req_san_pattern_# into the input and therefore the SubjectAltNameExt extension. For example, URI containing: injects host0.Example.com and host1.Example.com into the SubjectAltNameExt extension from the profile below. Example A.13. | [
"caAdminCert.cfg:input.i1.class_id=certReqInputImpl",
"caCMCUserCert.cfg:input.i1.class_id=cmcCertReqInputImpl",
"caDualCert.cfg:input.i1.class_id=dualKeyGenInputImpl",
"caAgentFileSigning.cfg:input.i2.class_id=fileSigningInputImpl",
"caDualCert.cfg:input.i1.class_id=keyGenInputImpl",
"caTempTokenDeviceKeyEnrollment.cfg:input.i1.class_id=nsHKeyCertReqInputImpl",
"caTempTokenUserEncryptionKeyEnrollment.cfg:input.i1.class_id=nsNKeyCertReqInputImpl",
"caTokenUserEncryptionKeyRenewal.cfg:input.i1.class_id=serialNumRenewInputImpl",
"caAdminCert.cfg:input.i3.class_id=subjectDNInputImpl",
"caDualCert.cfg:input.i2.class_id=subjectNameInputImpl",
"caAdminCert.cfg:input.i2.class_id=submitterInfoInputImpl",
"input.i3.class_id=genericInputImpl input.i3.params.gi_display_name0=ccm input.i3.params.gi_param_enable0=true input.i3.params.gi_param_name0=ccm input.i3.params.gi_display_name1=GUID input.i3.params.gi_param_enable1=true input.i3.params.gi_param_name1=GUID input.i3.params.gi_num=2 ... policyset.set1.p6.default.class_id=subjectAltNameExtDefaultImpl policyset.set1.p6.default.name=Subject Alternative Name Extension Default policyset.set1.p6.default.params.subjAltExtGNEnable_0=true policyset.set1.p6.default.params.subjAltExtGNEnable_1=true policyset.set1.p6.default.params.subjAltExtPattern_0=USDrequest.ccmUSD policyset.set1.p6.default.params.subjAltExtType_0=DNSName policyset.set1.p6.default.params.subjAltExtPattern_1=(Any)1.3.6.1.4.1.311.25.1,0410USDrequest.GUIDUSD policyset.set1.p6.default.params.subjAltExtType_1=OtherName policyset.set1.p6.default.params.subjAltNameExtCritical=false policyset.set1.p6.default.params.subjAltNameNumGNs=2",
"...&req_san_pattern_0=host0.Example.com&req_san_pattern_1=host1.Example.com",
"input.i3.class_id=subjectAltNameExtInputImpl input.i3.name=subjectAltNameExtInputImpl ... policyset.serverCertSet.9.constraint.class_id=noConstraintImpl policyset.serverCertSet.9.constraint.name=No Constraint policyset.serverCertSet.9.default.class_id=subjectAltNameExtDefaultImpl policyset.serverCertSet.9.default.name=Subject Alternative Name Extension Default policyset.serverCertSet.9.default.params.subjAltExtGNEnable_0=true policyset.serverCertSet.9.default.params.subjAltExtPattern_0=USDrequest.req_san_pattern_0USD policyset.serverCertSet.9.default.params.subjAltExtType_0=DNSName policyset.serverCertSet.9.default.params.subjAltExtGNEnable_1=true policyset.serverCertSet.9.default.params.subjAltExtPattern_1=USDrequest.req_san_pattern_1USD policyset.serverCertSet.9.default.params.subjAltExtType_1=DNSName policyset.serverCertSet.9.default.params.subjAltExtGNEnable_2=false policyset.serverCertSet.9.default.params.subjAltExtPattern_2=USDrequest.req_san_pattern_2USD policyset.serverCertSet.9.default.params.subjAltExtType_2=DNSName policyset.serverCertSet.9.default.params.subjAltNameExtCritical=false policyset.serverCertSet.9.default.params.subjAltNameNumGNs=3"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/certprofilereference |
Appendix B. Understanding the node_prep_inventory.yml file | Appendix B. Understanding the node_prep_inventory.yml file The node_prep_inventory.yml file is an example Ansible inventory file that you can use to prepare a replacement host for your Red Hat Hyperconverged Infrastructure for Virtualization cluster. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/node_prep_inventory.yml on any hyperconverged host. B.1. Configuration parameters for preparing a replacement node B.1.1. Hosts to configure hc_nodes A list of hyperconverged hosts that uses the back-end FQDN of the host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host's back-end FQDN. Configuration that is common to all hosts is defined in the vars: section. B.1.2. Multipath devices blacklist_mpath_devices (optional) By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file. On a server with four devices ( sda , sdb , sdc and sdd ), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list. Important Do not list encrypted devices ( luks_* devices) in blacklist_mpath_devices , as they require multipath configuration to work. B.1.3. Deduplication and compression gluster_infra_vdo (optional) Include this section to define a list of devices to use deduplication and compression. These devices require the /dev/mapper/<name> path format when you define them as volume groups in gluster_infra_volume_groups . Each device listed must have the following information: name A short name for the VDO device, for example vdo_sdc . device The device to use, for example, /dev/sdc . logicalsize The logical size of the VDO volume. Set this to ten times the size of the physical disk, for example, if you have a 500 GB disk, set logicalsize: '5000G' . emulate512 If you use devices with a 4 KB block size, set this to on . slabsize If the logical size of the volume is 1000 GB or larger, set this to 32G . If the logical size is smaller than 1000 GB, set this to 2G . blockmapcachesize Set this to 128M . writepolicy Set this to auto . For example: B.1.4. Storage infrastructure gluster_infra_volume_groups (required) This section creates the volume groups that contain the logical volumes. gluster_infra_mount_devices (required) This section creates the logical volumes that form Gluster bricks. gluster_infra_thinpools (optional) This section defines logical thin pools for use by thinly provisioned volumes. Thin pools are not suitable for the engine volume, but can be used for the vmstore and data volume bricks. vgname The name of the volume group that contains this thin pool. thinpoolname A name for the thin pool, for example, gluster_thinpool_sdc . thinpoolsize The sum of the sizes of all logical volumes to be created in this volume group. poolmetadatasize Set to 16G ; this is the recommended size for supported deployments. gluster_infra_cache_vars (optional) This section defines cache logical volumes to improve performance for slow devices. A fast cache device is attached to a thin pool, and requires gluster_infra_thinpool to be defined. vgname The name of a volume group with a slow device that requires a fast external cache. cachedisk The paths of the slow and fast devices, separated with a comma, for example, to use a cache device sde with the slow device sdb , specify /dev/sdb,/dev/sde . cachelvname A name for this cache logical volume. cachethinpoolname The thin pool to which the fast cache volume is attached. cachelvsize The size of the cache logical volume. Around 0.01% of this size is used for cache metadata. cachemode The cache mode. Valid values are writethrough and writeback . gluster_infra_thick_lvs (required) The thickly provisioned logical volumes that are used to create bricks. Bricks for the engine volume must be thickly provisioned. vgname The name of the volume group that contains the logical volume. lvname The name of the logical volume. size The size of the logical volume. The engine logical volume requires 100G . gluster_infra_lv_logicalvols (required) The thinly provisioned logical volumes that are used to create bricks. vgname The name of the volume group that contains the logical volume. thinpool The thin pool that contains the logical volume, if this volume is thinly provisioned. lvname The name of the logical volume. size The size of the logical volume. The engine logical volume requires 100G . gluster_infra_disktype (required) Specifies the underlying hardware configuration of the disks. Set this to the value that matches your hardware: RAID6 , RAID5 , or JBOD . gluster_infra_diskcount (required) Specifies the number of data disks in the RAID set. For a JBOD disk type, set this to 1 . gluster_infra_stripe_unit_size (required) The stripe size of the RAID set in megabytes. gluster_features_force_varlogsizecheck (required) Set this to true if you want to verify that your /var/log partition has sufficient free space during the deployment process. It is important to have sufficient space for logs, but it is not required to verify space requirements at deployment time if you plan to monitor space requirements carefully. gluster_set_selinux_labels (required) Ensures that volumes can be accessed when SELinux is enabled. Set this to true if SELinux is enabled on this host. B.1.5. Firewall and network infrastructure gluster_infra_fw_ports (required) A list of ports to open between all nodes, in the format <port>/<protocol> . gluster_infra_fw_permanent (required) Ensures the ports listed in gluster_infra_fw_ports are open after nodes are rebooted. Set this to true for production use cases. gluster_infra_fw_state (required) Enables the firewall. Set this to enabled for production use cases. gluster_infra_fw_zone (required) Specifies the firewall zone to which these gluster_infra_fw_\* parameters are applied. gluster_infra_fw_services (required) A list of services to allow through the firewall. Ensure glusterfs is defined here. B.2. Example node_prep_inventory.yml | [
"hc_nodes: hosts: new-host-backend-fqdn.example.com: [configuration specific to this host] vars: [configuration common to all hosts]",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: blacklist_mpath_devices: - sdb - sdc",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_vdo: - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G', blockmapcachesize: '128M', writepolicy: 'auto' } - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '500G', emulate512: 'off', slabsize: '2G', blockmapcachesize: '128M', writepolicy: 'auto' }",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_thinpools: - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'} - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_cache_vars: - vgname: gluster_vg_sdb cachedisk: /dev/sdb,/dev/sde cachelvname: cachelv_thinpool_sdb cachethinpoolname: gluster_thinpool_sdb cachelvsize: '250G' cachemode: writethrough",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G",
"hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G",
"hc_nodes: vars: gluster_infra_disktype: RAID6",
"hc_nodes: vars: gluster_infra_diskcount: 10",
"hc_nodes: vars: gluster_infra_stripe_unit_size: 256",
"hc_nodes: vars: gluster_features_force_varlogsizecheck: false",
"hc_nodes: vars: gluster_set_selinux_labels: true",
"hc_nodes: vars: gluster_infra_fw_ports: - 2049/tcp - 54321/tcp - 5900-6923/tcp - 16514/tcp - 5666/tcp - 16514/tcp",
"hc_nodes: vars: gluster_infra_fw_permanent: true",
"hc_nodes: vars: gluster_infra_fw_state: enabled",
"hc_nodes: vars: gluster_infra_fw_zone: public",
"hc_nodes: vars: gluster_infra_fw_services: - glusterfs",
"Section for Host Preparation Phase hc_nodes: hosts: # Host - The node which need to be prepared for replacement new-host-backend-fqdn.example.com : # Blacklist multipath devices which are used for gluster bricks # If you omit blacklist_mpath_devices it means all device will be whitelisted. # If the disks are not blacklisted, and then its taken that multipath configuration # exists in the server and one should provide /dev/mapper/<WWID> instead of /dev/sdx blacklist_mpath_devices: - sdb - sdc # Enable this section gluster_infra_vdo , if dedupe & compression is # required on that storage volume. # The variables refers to: # name - VDO volume name to be used # device - Disk name on which VDO volume to created # logicalsize - Logical size of the VDO volume.This value is 10 times # the size of the physical disk # emulate512 - VDO device is made as 4KB block sized storage volume(4KN) # slabsize - VDO slab size. If VDO logical size >= 1000G then # slabsize is 32G else slabsize is 2G # # Following VDO values are as per recommendation and treated as constants: # blockmapcachesize - 128M # writepolicy - auto # # gluster_infra_vdo: # - { name: vdo_sdc , device: /dev/sdc , logicalsize: 5000G , emulate512: off , slabsize: 32G , # blockmapcachesize: 128M , writepolicy: auto } # - { name: vdo_sdd , device: /dev/sdd , logicalsize: 3000G , emulate512: off , slabsize: 32G , # blockmapcachesize: 128M , writepolicy: auto } # When dedupe and compression is enabled on the device, # use pvname for that device as /dev/mapper/<vdo_device_name> # # The variables refers to: # vgname - VG to be created on the disk # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc) gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc - vgname: gluster_vg_sdd pvname: /dev/mapper/vdo_sdd gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd # 'thinpoolsize is the sum of sizes of all LVs to be created on that VG # In the case of VDO enabled, thinpoolsize is 10 times the sum of sizes # of all LVs to be created on that VG. Recommended values for # poolmetadatasize is 16GB and that should be considered exclusive of # thinpoolsize gluster_infra_thinpools: - {vgname: gluster_vg_sdc , thinpoolname: gluster_thinpool_sdc , thinpoolsize: 500G , poolmetadatasize: 16G } - {vgname: gluster_vg_sdd , thinpoolname: gluster_thinpool_sdd , thinpoolsize: 500G , poolmetadatasize: 16G } # Enable the following section if LVM cache is to enabled # Following are the variables: # vgname - VG with the slow HDD device that needs caching # cachedisk - Comma separated value of slow HDD and fast SSD # In this example, /dev/sdb is the slow HDD, /dev/sde is fast SSD # cachelvname - LV cache name # cachethinpoolname - Thinpool to which the fast SSD to be attached # cachelvsize - Size of cache data LV. This is the SSD_size - (1/1000) of SSD_size # 1/1000th of SSD space will be used by cache LV meta # cachemode - writethrough or writeback # gluster_infra_cache_vars: # - vgname: gluster_vg_sdb # cachedisk: /dev/sdb,/dev/sde # cachelvname: cachelv_thinpool_sdb # cachethinpoolname: gluster_thinpool_sdb # cachelvsize: 250G # cachemode: writethrough # Only the engine brick needs to be thickly provisioned # Engine brick requires 100GB of disk space gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G # Common configurations vars: # In case of IPv6 based deployment \"gluster_features_enable_ipv6\" needs to be enabled,below line needs to be uncommented, like: # gluster_features_enable_ipv6: true # Firewall setup gluster_infra_fw_ports: - 2049/tcp - 54321/tcp - 5900-6923/tcp - 16514/tcp - 5666/tcp - 16514/tcp gluster_infra_fw_permanent: true gluster_infra_fw_state: enabled gluster_infra_fw_zone: public gluster_infra_fw_services: - glusterfs # Allowed values for gluster_infra_disktype - RAID6, RAID5, JBOD gluster_infra_disktype: RAID6 # gluster_infra_diskcount is the number of data disks in the RAID set. # Note for JBOD its 1 gluster_infra_diskcount: 10 gluster_infra_stripe_unit_size: 256 gluster_features_force_varlogsizecheck: false gluster_set_selinux_labels: true"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/understanding-the-node_prep_inventory-yml-file |
Chapter 3. ProjectRequest [project.openshift.io/v1] | Chapter 3. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string Description is the description to apply to a project displayName string DisplayName is the display name to apply to a project kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 3.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projectrequests GET : list objects of kind ProjectRequest POST : create a ProjectRequest 3.2.1. /apis/project.openshift.io/v1/projectrequests Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind ProjectRequest Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectRequest Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ProjectRequest schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK ProjectRequest schema 201 - Created ProjectRequest schema 202 - Accepted ProjectRequest schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/project_apis/projectrequest-project-openshift-io-v1 |
Chapter 1. Introduction to database servers | Chapter 1. Introduction to database servers A database server is a service that provides features of a database management system (DBMS). DBMS provides utilities for database administration and interacts with end users, applications, and databases. Red Hat Enterprise Linux 9 provides the following database management systems: MariaDB 10.5 MariaDB 10.11 - available since RHEL 9.4 MySQL 8.0 PostgreSQL 13 PostgreSQL 15 - available since RHEL 9.2 PostgreSQL 16 - available since RHEL 9.4 Redis 6 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_database_servers/introduction-to-databases_configuring-and-using-database-servers |
3.6. Tickless Kernel | 3.6. Tickless Kernel Previously, the Linux kernel periodically interrupted each CPU on a system at a predetermined frequency - 100 Hz, 250 Hz, or 1000 Hz, depending on the platform. The kernel queried the CPU about the processes that it was executing, and used the results for process accounting and load balancing. Known as the timer tick , the kernel performed this interrupt regardless of the power state of the CPU. Therefore, even an idle CPU was responding to up to 1000 of these requests every second. On systems that implemented power saving measures for idle CPUs, the timer tick prevented the CPU from remaining idle long enough for the system to benefit from these power savings. The kernel in Red Hat Enterprise Linux 6 runs tickless : that is, it replaces the old periodic timer interrupts with on-demand interrupts. Therefore, idle CPUs are allowed to remain idle until a new task is queued for processing, and CPUs that have entered lower power states can remain in these states longer. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/Tickless-kernel |
Chapter 4. Performing maintenance on Compute nodes and Controller nodes with Instance HA | Chapter 4. Performing maintenance on Compute nodes and Controller nodes with Instance HA To perform maintenance on a Compute node or a Controller node with Instance HA, stop the node by setting it in standby mode and disabling the Pacemaker resources on the node. After you complete the maintenance work, you start the node and check that the Pacemaker resources are healthy. Prerequisites A running overcloud with Instance HA enabled Procedure Log in to a Controller node and stop the Compute or Controller node: Important You must log in to a different node from the node you want to stop. Disable the Pacemaker resources on the node: Perform any maintenance work on the node. Restore the IPMI connection and start the node. Wait until the node is ready before proceeding. Enable the Pacemaker resources on the node and start the node: If you set the node to maintenance mode, source the credential file for your overcloud and unset the node from maintenance mode: Verification Check that the Pacemaker resources are active and healthy: If any Pacemaker resources fail to start during the startup process, run the pcs resource cleanup command to reset the status and the fail count of the resource. If you evacuated instances from a Compute node before you stopped the node, check that the instances are migrated to a different node: | [
"pcs node standby <node UUID>",
"pcs resource disable <ocf::pacemaker:remote on the node>",
"pcs resource enable <ocf::pacemaker:remote on the node> pcs node unstandby <node UUID>",
"source stackrc openstack baremetal node maintenance unset <baremetal node UUID>",
"pcs status",
"openstack server list --long nova migration-list"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_for_compute_instances/proc_performing-maintenance-compute-controller-nodes-with-instanceha_rhosp |
Chapter 10. Updating OpenShift Logging | Chapter 10. Updating OpenShift Logging 10.1. Supported Versions For version compatibility and support information, see Red Hat OpenShift Container Platform Life Cycle Policy To upgrade from cluster logging in OpenShift Container Platform version 4.6 and earlier to OpenShift Logging 5.x, you update the OpenShift Container Platform cluster to version 4.7 or 4.8. Then, you update the following operators: From Elasticsearch Operator 4.x to OpenShift Elasticsearch Operator 5.x From Cluster Logging Operator 4.x to Red Hat OpenShift Logging Operator 5.x To upgrade from a version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions. 10.2. Updating Logging to the current version To update Logging to the current version, you change the subscriptions for the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. Important You must update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. You must also update both Operators to the same version. If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. Prerequisites The OpenShift Container Platform version is 4.7 or later. The Logging status is healthy: All pods are ready . The Elasticsearch cluster is healthy. Your Elasticsearch and Kibana data is backed up. Procedure Update the OpenShift Elasticsearch Operator: From the web console, click Operators Installed Operators . Select the openshift-Operators-redhat project. Click the OpenShift Elasticsearch Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select stable-5.x and click Save . Wait for a few seconds, then click Operators Installed Operators . Verify that the OpenShift Elasticsearch Operator version is 5.x.x. Wait for the Status field to report Succeeded . Update the Red Hat OpenShift Logging Operator: From the web console, click Operators Installed Operators . Select the openshift-logging project. Click the Red Hat OpenShift Logging Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select stable-5.x and click Save . Wait for a few seconds, then click Operators Installed Operators . Verify that the Red Hat OpenShift Logging Operator version is 5.y.z Wait for the Status field to report Succeeded . Check the logging components: Ensure that all Elasticsearch pods are in the Ready status: USD oc get pod -n openshift-logging --selector component=elasticsearch Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m Ensure that the Elasticsearch cluster is healthy: USD oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health { "cluster_name" : "elasticsearch", "status" : "green", } Ensure that the Elasticsearch cron jobs are created: USD oc project openshift-logging USD oc get cronjob NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s Verify that the log store is updated to 5.x and the indices are green : USD oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices Verify that the output includes the app-00000x , infra-00000x , audit-00000x , .security indices. Example 10.1. Sample output with indices in a green status Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0 Verify that the log collector is updated: USD oc get ds collector -o json | grep collector Verify that the output includes a collectort container: "containerName": "collector" Verify that the log visualizer is updated to 5.x using the Kibana CRD: USD oc get kibana kibana -o json Verify that the output includes a Kibana pod with the ready status: Example 10.2. Sample output with a ready Kibana pod [ { "clusterCondition": { "kibana-5fdd766ffd-nb2jj": [ { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" }, { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" } ] }, "deployment": "kibana", "pods": { "failed": [], "notReady": [] "ready": [] }, "replicaSets": [ "kibana-5fdd766ffd" ], "replicas": 1 } ] | [
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get ds collector -o json | grep collector",
"\"containerName\": \"collector\"",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/cluster-logging-upgrading |
3.4. CPU Power Saving Policies | 3.4. CPU Power Saving Policies cpupower provides ways to regulate your processor's power saving policies. Use the following options with the cpupower set command: --perf-bias <0-15> Allows software on supported Intel processors to more actively contribute to determining the balance between optimum performance and saving power. This does not override other power saving policies. Assigned values range from 0 to 15, where 0 is optimum performance and 15 is optimum power efficiency. By default, this option applies to all cores. To apply it only to individual cores, add the --cpu <cpulist> option. --sched-mc <0|1|2> Restricts the use of power by system processes to the cores in one CPU package before other CPU packages are drawn from. 0 sets no restrictions, 1 initially employs only a single CPU package, and 2 does this in addition to favouring semi-idle CPU packages for handling task wakeups. --sched-smt <0|1|2> Restricts the use of power by system processes to the thread siblings of one CPU core before drawing on other cores. 0 sets no restrictions, 1 initially employs only a single CPU package, and 2 does this in addition to favouring semi-idle CPU packages for handling task wakeups. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/cpu_power_saving |
1.4. Bookmarks | 1.4. Bookmarks 1.4.1. Saving a Query String as a Bookmark A bookmark can be used to remember a search query, and shared with other users. Procedure Enter the desired search query in the search bar and perform the search. Click the star-shaped Bookmark button to the right of the search bar. This opens the New Bookmark window. Enter the Name of the bookmark. Edit the Search string field, if required. Click OK . Click the Bookmarks icon ( ) in the header bar to find and select the bookmark. 1.4.2. Editing a Bookmark You can modify the name and search string of a bookmark. Procedure Click the Bookmarks icon ( ) in the header bar. Select a bookmark and click Edit . Change the Name and Search string fields as necessary. Click OK . 1.4.3. Deleting a Bookmark When a bookmark is no longer needed, remove it. Procedure Click the Bookmarks icon ( ) in the header bar. Select a bookmark and click Remove . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-Bookmarks |
Chapter 11. Using service accounts in applications | Chapter 11. Using service accounts in applications 11.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods Applications inside containers to make API calls for discovery purposes External applications to make API calls for monitoring or integration purposes Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 11.2. Default service accounts Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project. 11.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 11.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. Note The builder service account is not created if the Build cluster capability is not enabled. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. Note The deployer service account is not created if the DeploymentConfig cluster capability is not enabled. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any image stream in the project using the internal container image registry. 11.2.3. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.17, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 11.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none> | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/using-service-accounts |
Chapter 6. Remediating nodes with Node Health Checks | Chapter 6. Remediating nodes with Node Health Checks You can use the Node Health Check Operator to identify unhealthy nodes. The Operator then uses other operators to remediate the unhealthy nodes. The Node Health Check Operator installs the Self Node Remediation Operator as a default remediation provider. For more information on the Self Node Remediation Operator, see the Using Self Node Remediation chapter. The Node Health Check Operator can also be used with other remediation providers, including: The Fence Agents Remediation Operator. The Machine Deletion Remediation Operator. Note Due to the existence of preinstalled machine health checks on Red Hat OpenShift Service on AWS (ROSA) clusters, the Node Health Check Operator is unable to function in such an environment. 6.1. About the Node Health Check Operator The Node Health Check Operator detects the health of the nodes in a cluster. The NodeHealthCheck controller creates the NodeHealthCheck custom resource (CR), which defines a set of criteria and thresholds to determine the health of a node. When the Node Health Check Operator detects an unhealthy node, it creates a remediation CR that triggers the remediation provider. For example, the controller creates the SelfNodeRemediation CR, which triggers the Self Node Remediation Operator to remediate the unhealthy node. The NodeHealthCheck CR resembles the following YAML file, with self-node-remediation as the remediation provider: apiVersion: remediation.medik8s.io/v1alpha1 kind: NodeHealthCheck metadata: name: nodehealthcheck-sample spec: minHealthy: 51% 1 pauseRequests: 2 - <pause-test-cluster> remediationTemplate: 3 apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: self-node-remediation-resource-deletion-template namespace: openshift-workload-availability kind: SelfNodeRemediationTemplate escalatingRemediations: 4 - remediationTemplate: apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: self-node-remediation-resource-deletion-template namespace: openshift-workload-availability kind: SelfNodeRemediationTemplate order: 1 timeout: 300s selector: 5 matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists unhealthyConditions: 6 - type: Ready status: "False" duration: 300s 7 - type: Ready status: Unknown duration: 300s 8 1 Specifies the amount of healthy nodes(in percentage or number) required for a remediation provider to concurrently remediate nodes in the targeted pool. If the number of healthy nodes equals to or exceeds the limit set by minHealthy , remediation occurs. The default value is 51%. 2 Prevents any new remediation from starting, while allowing any ongoing remediations to persist. The default value is empty. However, you can enter an array of strings that identify the cause of pausing the remediation. For example, pause-test-cluster . Note During the upgrade process, nodes in the cluster might become temporarily unavailable and get identified as unhealthy. In the case of worker nodes, when the Operator detects that the cluster is upgrading, it stops remediating new unhealthy nodes to prevent such nodes from rebooting. 3 Specifies a remediation template from the remediation provider. For example, from the Self Node Remediation Operator. remediationTemplate is mutually exclusive with escalatingRemediations . 4 Specifies a list of RemediationTemplates with order and timeout fields. To obtain a healthy node, use this field to sequence and configure multiple remediations. This strategy increases the likelihood of obtaining a healthy node, instead of depending on a single remediation that might not be successful. The order field determines the order in which the remediations are invoked (lower order = earlier invocation). The timeout field determines when the remediation is invoked. escalatingRemediations is mutually exclusive with remediationTemplate . Note When escalatingRemediations is used the remediation providers, Self Node Remediation Operator and Fence Agents Remediation Operator, can be used multiple times with different remediationTemplate configurations. However, you can not use the same Machine Deletion Remediation configuration with different remediationTemplate configurations. 5 Specifies a selector that matches labels or expressions that you want to check. Avoid selecting both control-plane and worker nodes in one CR. 6 Specifies a list of the conditions that determine whether a node is considered unhealthy. 7 8 Specifies the timeout duration for a node condition. If a condition is met for the duration of the timeout, the node will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy node. The NodeHealthCheck CR resembles the following YAML file, with metal3 as the remediation provider: apiVersion: remediation.medik8s.io/v1alpha1 kind: NodeHealthCheck metadata: name: nhc-worker-metal3 spec: minHealthy: 30% remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation namespace: openshift-machine-api selector: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists unhealthyConditions: - duration: 300s status: 'False' type: Ready - duration: 300s status: 'Unknown' type: Ready Note The matchExpressions are examples only; you must map your machine groups based on your specific needs. The Metal3RemediationTemplate resembles the following YAML file, with metal3 as the remediation provider: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation namespace: openshift-machine-api spec: template: spec: strategy: retryLimit: 1 timeout: 5m0s type: Reboot Note In addition to creating a NodeHealthCheck CR, you must also create the Metal3RemediationTemplate . 6.1.1. Understanding the Node Health Check Operator workflow When a node is identified as unhealthy, the Node Health Check Operator checks how many other nodes are unhealthy. If the number of healthy nodes exceeds the amount that is specified in the minHealthy field of the NodeHealthCheck CR, the controller creates a remediation CR from the details that are provided in the external remediation template by the remediation provider. After remediation, the kubelet updates the node's health status. When the node turns healthy, the controller deletes the external remediation template. 6.1.2. About how node health checks prevent conflicts with machine health checks When both, node health checks and machine health checks are deployed, the node health check avoids conflict with the machine health check. Note Red Hat OpenShift deploys machine-api-termination-handler as the default MachineHealthCheck resource. The following list summarizes the system behavior when node health checks and machine health checks are deployed: If only the default machine health check exists, the node health check continues to identify unhealthy nodes. However, the node health check ignores unhealthy nodes in a Terminating state. The default machine health check handles the unhealthy nodes with a Terminating state. Example log message INFO MHCChecker ignoring unhealthy Node, it is terminating and will be handled by MHC {"NodeName": "node-1.example.com"} If the default machine health check is modified (for example, the unhealthyConditions is Ready ), or if additional machine health checks are created, the node health check is disabled. Example log message When, again, only the default machine health check exists, the node health check is re-enabled. Example log message 6.2. Control plane fencing In earlier releases, you could enable Self Node Remediation and Node Health Check on worker nodes. In the event of node failure, you can now also follow remediation strategies on control plane nodes. Do not use the same NodeHealthCheck CR for worker nodes and control plane nodes. Grouping worker nodes and control plane nodes together can result in incorrect evaluation of the minimum healthy node count, and cause unexpected or missing remediations. This is because of the way the Node Health Check Operator handles control plane nodes. You should group the control plane nodes in their own group and the worker nodes in their own group. If required, you can also create multiple groups of worker nodes. Considerations for remediation strategies: Avoid Node Health Check configurations that involve multiple configurations overlapping the same nodes because they can result in unexpected behavior. This suggestion applies to both worker and control plane nodes. The Node Health Check Operator implements a hardcoded limitation of remediating a maximum of one control plane node at a time. Multiple control plane nodes should not be remediated at the same time. 6.3. Installing the Node Health Check Operator by using the web console You can use the Red Hat OpenShift web console to install the Node Health Check Operator. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, navigate to Operators OperatorHub . Select the Node Health Check Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-workload-availability namespace. Ensure that the Console plug-in is set to Enable . Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-workload-availability namespace and that its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-workload-availability project that are reporting issues. 6.4. Installing the Node Health Check Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Node Health Check Operator. You can install the Node Health Check Operator in your own namespace or in the openshift-workload-availability namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a Namespace custom resource (CR) for the Node Health Check Operator: Define the Namespace CR and save the YAML file, for example, node-health-check-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability To create the Namespace CR, run the following command: USD oc create -f node-health-check-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability To create the OperatorGroup CR, run the following command: USD oc create -f workload-availability-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, node-health-check-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-health-check-operator namespace: openshift-workload-availability 1 spec: channel: stable 2 installPlanApproval: Manual 3 name: node-healthcheck-operator source: redhat-operators sourceNamespace: openshift-marketplace package: node-healthcheck-operator 1 Specify the Namespace where you want to install the Node Health Check Operator. To install the Node Health Check Operator in the openshift-workload-availability namespace, specify openshift-workload-availability in the Subscription CR. 2 Specify the channel name for your subscription. To upgrade to the latest version of the Node Health Check Operator, you must manually change the channel name for your subscription from candidate to stable . 3 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. To create the Subscription CR, run the following command: USD oc create -f node-health-check-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-workload-availability Example output NAME DISPLAY VERSION REPLACES PHASE node-healthcheck-operator.v0.7.0 Node Health Check Operator 0.7.0 node-healthcheck-operator.v0.6.1 Succeeded Verify that the Node Health Check Operator is up and running: USD oc get deployment -n openshift-workload-availability Example output NAME READY UP-TO-DATE AVAILABLE AGE node-healthcheck-controller-manager 2/2 2 2 10d 6.5. Creating a node health check Using the web console, you can create a node health check to identify unhealthy nodes and specify the remediation type and strategy to fix them. Procedure From the Administrator perspective of the Red Hat OpenShift web console, click Compute NodeHealthChecks CreateNodeHealthCheck . Specify whether to configure the node health check using the Form view or the YAML view . Enter a Name for the node health check. The name must consist of lower case, alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character. Specify the Remediator type, and Self node remediation or Other . The Self node remediation option is part of the Self Node Remediation Operator that is installed with the Node Health Check Operator. Selecting Other requires an API version , Kind , Name , and Namespace to be entered, which then points to the remediation template resource of a remediator. Make a Nodes selection by specifying the labels of the nodes you want to remediate. The selection matches labels that you want to check. If more than one label is specified, the nodes must contain each label. The default value is empty, which selects both worker and control-plane nodes. Note When creating a node health check with the Self Node Remediation Operator, you must select either node-role.kubernetes.io/worker or node-role.kubernetes.io/control-plane as the value. Specify the minimum number of healthy nodes, using either a percentage or a number, required for a NodeHealthCheck to remediate nodes in the targeted pool. If the number of healthy nodes equals to or exceeds the limit set by Min healthy , remediation occurs. The default value is 51%. Specify a list of Unhealthy conditions that if a node meets determines whether the node is considered unhealthy, and requires remediation. You can specify the Type , Status and Duration . You can also create your own custom type. Click Create to create the node health check. Verification Navigate to the Compute NodeHealthCheck page and verify that the corresponding node health check is listed, and their status displayed. Once created, node health checks can be paused, modified, and deleted. 6.6. Gathering data about the Node Health Check Operator To collect debugging information about the Node Health Check Operator, use the must-gather tool. For information about the must-gather image for the Node Health Check Operator, see Gathering data about specific features . 6.7. Additional resources Changing the update channel for an Operator Using Operator Lifecycle Manager on restricted networks . | [
"apiVersion: remediation.medik8s.io/v1alpha1 kind: NodeHealthCheck metadata: name: nodehealthcheck-sample spec: minHealthy: 51% 1 pauseRequests: 2 - <pause-test-cluster> remediationTemplate: 3 apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: self-node-remediation-resource-deletion-template namespace: openshift-workload-availability kind: SelfNodeRemediationTemplate escalatingRemediations: 4 - remediationTemplate: apiVersion: self-node-remediation.medik8s.io/v1alpha1 name: self-node-remediation-resource-deletion-template namespace: openshift-workload-availability kind: SelfNodeRemediationTemplate order: 1 timeout: 300s selector: 5 matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists unhealthyConditions: 6 - type: Ready status: \"False\" duration: 300s 7 - type: Ready status: Unknown duration: 300s 8",
"apiVersion: remediation.medik8s.io/v1alpha1 kind: NodeHealthCheck metadata: name: nhc-worker-metal3 spec: minHealthy: 30% remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation namespace: openshift-machine-api selector: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists unhealthyConditions: - duration: 300s status: 'False' type: Ready - duration: 300s status: 'Unknown' type: Ready",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation namespace: openshift-machine-api spec: template: spec: strategy: retryLimit: 1 timeout: 5m0s type: Reboot",
"INFO MHCChecker ignoring unhealthy Node, it is terminating and will be handled by MHC {\"NodeName\": \"node-1.example.com\"}",
"INFO controllers.NodeHealthCheck disabling NHC in order to avoid conflict with custom MHCs configured in the cluster {\"NodeHealthCheck\": \"/nhc-worker-default\"}",
"INFO controllers.NodeHealthCheck re-enabling NHC, no conflicting MHC configured in the cluster {\"NodeHealthCheck\": \"/nhc-worker-default\"}",
"apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability",
"oc create -f node-health-check-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability",
"oc create -f workload-availability-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-health-check-operator namespace: openshift-workload-availability 1 spec: channel: stable 2 installPlanApproval: Manual 3 name: node-healthcheck-operator source: redhat-operators sourceNamespace: openshift-marketplace package: node-healthcheck-operator",
"oc create -f node-health-check-subscription.yaml",
"oc get csv -n openshift-workload-availability",
"NAME DISPLAY VERSION REPLACES PHASE node-healthcheck-operator.v0.7.0 Node Health Check Operator 0.7.0 node-healthcheck-operator.v0.6.1 Succeeded",
"oc get deployment -n openshift-workload-availability",
"NAME READY UP-TO-DATE AVAILABLE AGE node-healthcheck-controller-manager 2/2 2 2 10d"
] | https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/node-health-check-operator |
17.13. Dynamically Changing a Host Physical Machine or a Network Bridge that is Attached to a Virtual NIC | 17.13. Dynamically Changing a Host Physical Machine or a Network Bridge that is Attached to a Virtual NIC This section demonstrates how to move the vNIC of a guest virtual machine from one bridge to another while the guest virtual machine is running without compromising the guest virtual machine Prepare guest virtual machine with a configuration similar to the following: Prepare an XML file for interface update: Start the guest virtual machine, confirm the guest virtual machine's network functionality, and check that the guest virtual machine's vnetX is connected to the bridge you indicated. Update the guest virtual machine's network with the new interface parameters with the following command: On the guest virtual machine, run service network restart . The guest virtual machine gets a new IP address for virbr1. Check the guest virtual machine's vnet0 is connected to the new bridge(virbr1) | [
"<interface type='bridge'> <mac address='52:54:00:4a:c9:5e'/> <source bridge='virbr0'/> <model type='virtio'/> </interface>",
"cat br1.xml",
"<interface type='bridge'> <mac address='52:54:00:4a:c9:5e'/> <source bridge='virbr1'/> <model type='virtio'/> </interface>",
"brctl show bridge name bridge id STP enabled interfaces virbr0 8000.5254007da9f2 yes virbr0-nic vnet0 virbr1 8000.525400682996 yes virbr1-nic",
"virsh update-device test1 br1.xml Device updated successfully",
"brctl show bridge name bridge id STP enabled interfaces virbr0 8000.5254007da9f2 yes virbr0-nic virbr1 8000.525400682996 yes virbr1-nic vnet0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-dynamically_changing_a_host_physical_machine_or_a_network_bridge_that_is_attached_to_a_virtual_nic |
Chapter 16. Managing container images by using the RHEL web console | Chapter 16. Managing container images by using the RHEL web console You can use the RHEL web console web-based interface to pull, prune, or delete your container images. 16.1. Pulling container images in the web console You can download container images to your local system and use them to create your containers. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Images table, click the overflow menu in the upper-right corner and select Download new image . The Search for an image dialog box appears. In the Search for field, enter the name of the image or specify its description. In the in drop-down list, select the registry from which you want to pull the image. Optional: In the Tag field, enter the tag of the image. Click Download . Verification Click Podman containers in the main menu. You can see the newly downloaded image in the Images table. Note You can create a container from the downloaded image by clicking the Create container in the Images table. To create the container, follow steps 3-8 in Creating containers in the web console . 16.2. Pruning container images in the web console You can remove all unused images that do not have any containers based on it. Prerequisites At least one container image is pulled. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Images table, click the overflow menu in the upper-right corner and select Prune unused images . The pop-up window with the list of images appears. Click Prune to confirm your choice. Verification Click Podman containers in the main menu. The deleted images should not be listed in the Images table. 16.3. Deleting container images in the web console You can delete a previously pulled container image using the web console. Prerequisites At least one container image is pulled. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-podman add-on is installed: Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Podman containers in the main menu. In the Images table, select the image you want to delete and click the overflow menu and select Delete . The window appears. Click Delete tagged images to confirm your choice. Verification Click the Podman containers in the main menu. The deleted container should not be listed in the Images table. | [
"yum install cockpit-podman",
"yum install cockpit-podman",
"yum install cockpit-podman"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/managing-container-images-by-using-the-rhel-web-console_building-running-and-managing-containers |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configure_data_sources/making-open-source-more-inclusive |
Chapter 27. Using snapshots on Stratis file systems | Chapter 27. Using snapshots on Stratis file systems You can use snapshots on Stratis file systems to capture file system state at arbitrary times and restore it in the future. Important Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 27.1. Characteristics of Stratis snapshots In Stratis, a snapshot is a regular Stratis file system created as a copy of another Stratis file system. The snapshot initially contains the same file content as the original file system, but can change as the snapshot is modified. Whatever changes you make to the snapshot will not be reflected in the original file system. The current snapshot implementation in Stratis is characterized by the following: A snapshot of a file system is another file system. A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than the file system it was created from. A file system does not have to be mounted to create a snapshot from it. Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the XFS log. 27.2. Creating a Stratis snapshot You can create a Stratis file system as a snapshot of an existing Stratis file system. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis file system. See Creating a Stratis file system . Procedure Create a Stratis snapshot: Additional resources stratis(8) man page on your system 27.3. Accessing the content of a Stratis snapshot You can mount a snapshot of a Stratis file system to make it accessible for read and write operations. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis file system . Procedure To access the snapshot, mount it as a regular file system from the /dev/stratis/ my-pool / directory: Additional resources Mounting a Stratis file system mount(8) man page on your system 27.4. Reverting a Stratis file system to a snapshot You can revert the content of a Stratis file system to the state captured in a Stratis snapshot. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure Optional: Back up the current state of the file system to be able to access it later: Unmount and remove the original file system: Create a copy of the snapshot under the name of the original file system: Mount the snapshot, which is now accessible with the same name as the original file system: The content of the file system named my-fs is now identical to the snapshot my-fs-snapshot . Additional resources stratis(8) man page on your system 27.5. Removing a Stratis snapshot You can remove a Stratis snapshot from a pool. Data on the snapshot are lost. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis snapshot. See Creating a Stratis snapshot . Procedure Unmount the snapshot: Destroy the snapshot: Additional resources stratis(8) man page on your system | [
"stratis fs snapshot my-pool my-fs my-fs-snapshot",
"mount /dev/stratis/ my-pool / my-fs-snapshot mount-point",
"stratis filesystem snapshot my-pool my-fs my-fs-backup",
"umount /dev/stratis/ my-pool / my-fs stratis filesystem destroy my-pool my-fs",
"stratis filesystem snapshot my-pool my-fs-snapshot my-fs",
"mount /dev/stratis/ my-pool / my-fs mount-point",
"umount /dev/stratis/ my-pool / my-fs-snapshot",
"stratis filesystem destroy my-pool my-fs-snapshot"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/using-snapshots-on-stratis-file-systems |
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator | Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator 3.1. Release notes 3.1.1. Custom Metrics Autoscaler Operator release notes The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues. The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Container Platform horizontal pod autoscaler (HPA). Note The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 3.1.1.1. Supported versions The following table defines the Custom Metrics Autoscaler Operator versions for each OpenShift Container Platform version. Version OpenShift Container Platform version General availability 2.14.1 4.16 General availability 2.14.1 4.15 General availability 2.14.1 4.14 General availability 2.14.1 4.13 General availability 2.14.1 4.12 General availability 3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:7348 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.1.2.1. Bug fixes Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. ( OCPBUGS-37989 ) 3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator The following release notes are for versions of the Custom Metrics Autoscaler Operator. For the current version, see Custom Metrics Autoscaler Operator release notes . 3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE, a new feature, and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:5865 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.1.1. New features and enhancements 3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on an hourly schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales pods to your desired amount. When the time frame ends, the Operator scales back down to the level. For more information, see Understanding the Cron trigger . 3.1.2.1.2. Bug fixes Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated. As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. ( OCPBUGS-32521 ) 3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1 release notes This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:4837 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.2.1. New features and enhancements 3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps. For more information, see Custom CA certificates for the Custom Metrics Autoscaler . 3.1.2.2.2. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Scaled objects containing cron triggers are currently not supported for the custom metrics autoscaler. ( OCPBUGS-34018 ) 3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:2901 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.3.1. Bug fixes Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. ( OCPBUGS-30305 ) Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. ( OCPBUGS-30360 ) Previously, when following an HTTP redirect to a domain that is not on a matching subdomain or on an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie . For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. ( OCPBUGS-30365 ) Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. ( OCPBUGS-30370 ) Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. ( OCPBUGS-30397 ) Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. ( OCPBUGS-30894 ) 3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:2043 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.4.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-32395 ) 3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:1812 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.5.1. Bug fixes Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. ( OCPBUGS-30145 ) Previously the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http . This fix corrects the ServiceMonitor CR to reference the proper port name of metrics . As a result, the Service Monitor functions properly. ( OCPBUGS-25806 ) 3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2023:6144 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.6.1. Bug fixes Because the Custom Metrics Autoscaler Operator version 3.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. ( OCPBUGS-22361 ) 3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.7.1. New features and enhancements 3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on OpenShift ROSA and OpenShift Dedicated managed clusters. versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace. This prevented the Operator from being installed on OpenShift ROSA and OpenShift Dedicated clusters. This version of Custom Metrics Autoscaler allows installation to other namespaces such as openshift-operators or keda , enabling installation into ROSA and Dedicated clusters. 3.1.2.7.2. Bug fixes Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. ( OCPBUGS-15779 ) Previously, any annotation or label change to objects managed by the Custom Metrics Autoscaler were reverted by Custom Metrics Autoscaler Operator any time the Keda Controller was modified, for example after a configuration change. This caused continuous changing of labels in your objects. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotation or label are no longer inappropriately reverted. ( OCPBUGS-15590 ) 3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.8.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-15264 ) Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to be an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. ( OCPBUGS-15038 ) Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. ( OCPBUGS-15293 ) Previously, if you used a Prometheus trigger configured with authentication method other than pod identity, and the podIdentity parameter was set to none , the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with authentication method other than pod identity, and the podIdentity parameter sset to none now properly scales. ( OCPBUGS-15274 ) 3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.9.1. New features and enhancements 3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.2.9.1.2. Performance metrics You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator. 3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready. 3.1.2.9.1.4. Replica fall back for scaled objects You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source. 3.1.2.9.1.5. Customizable HPA naming for scaled objects You can now specify a custom name for the horizontal pod autoscaler in scaled objects. 3.1.2.9.1.6. Activation and scaling thresholds Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator does that scaling, after which the HPA performs the scaling. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies. 3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683 . Important The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature. 3.1.2.10.1. New features and enhancements 3.1.2.10.1.1. Operator upgrade support You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" for information on upgrading an Operator. 3.1.2.10.1.2. must-gather support You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the OpenShift Container Platform must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different than for other operators. See "Gathering debugging data in the "Additional resources" for more information. 3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042 . Important The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature. 3.1.2.11.1. New features and enhancements 3.1.2.11.1.1. Audit Logging You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 3.1.2.11.1.2. Scale applications based on Apache Kafka metrics You can now use the KEDA Apache kafka trigger/scaler to scale deployments based on an Apache Kafka topic. 3.1.2.11.1.3. Scale applications based on CPU metrics You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics. 3.1.2.11.1.4. Scale applications based on memory metrics You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics. 3.2. Custom Metrics Autoscaler Operator overview As a developer, you can use Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how OpenShift Container Platform should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics. The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics. The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers , also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Container Platform can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed. Note You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0 , the custom metrics autoscaler scales the workload down from 1 to 0 replicas to or up from 0 replicas to 1. This is known as the activation phase . After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase . Some triggers allow you to change the number of replicas that are scaled by the cluster metrics autoscaler. In all cases, the parameter to configure the activation phase always uses the same phrase, prefixed with activation . For example, if the threshold parameter configures scaling, activationThreshold would configure activation. Configuring the activation and scaling phases allows you more flexibility with your scaling policies. For example, you can configure a higher activation phase to prevent scaling up or down if the metric is particularly low. The activation value has more priority than the scaling value in case of different decisions for each. For example, if the threshold is set to 10 , and the activationThreshold is 50 , if the metric reports 40 , the scaler is not active and the pods are scaled to zero even if the HPA requires 4 instances. Figure 3.1. Custom metrics autoscaler workflow You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object. The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a GRPC connection to the controller to request it for the most recent trigger data received from the scaler. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas. As a it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action. 3.2.1. Custom CA certificates for the Custom Metrics Autoscaler By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services. If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler . The Operator loads those certificates on start-up and registers them as trusted by the Operator. The config maps can contain one or more certificate files that contain one or more PEM-encoded CA certificates. Or, you can use separate config maps for each certificate file. Note If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect. 3.3. Installing the custom metrics autoscaler You can use the OpenShift Container Platform web console to install the Custom Metrics Autoscaler Operator. The installation creates the following five CRDs: ClusterTriggerAuthentication KedaController ScaledJob ScaledObject TriggerAuthentication 3.3.1. Installing the custom metrics autoscaler You can use the following procedure to install the Custom Metrics Autoscaler Operator. Prerequisites Remove any previously-installed Technology Preview versions of the Cluster Metrics Autoscaler Operator. Remove any versions of the community-based KEDA. Also, remove the KEDA 1.x custom resource definitions by running the following commands: USD oc delete crd scaledobjects.keda.k8s.io USD oc delete crd triggerauthentications.keda.k8s.io Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example: USD oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Custom Metrics Autoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode . This installs the Operator in all namespaces. Ensure that the openshift-keda namespace is selected for Installed Namespace . OpenShift Container Platform creates the namespace, if not present in your cluster. Click Install . Verify the installation by listing the Custom Metrics Autoscaler Operator components: Navigate to Workloads Pods . Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running. Navigate to Workloads Deployments to verify that the custom-metrics-autoscaler-operator deployment is running. Optional: Verify the installation in the OpenShift CLI using the following commands: USD oc get all -n openshift-keda The output appears similar to the following: Example output NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m Install the KedaController custom resource, which creates the required CRDs: In the OpenShift Container Platform web console, click Operators Installed Operators . Click Custom Metrics Autoscaler . On the Operator Details page, click the KedaController tab. On the KedaController tab, click Create KedaController and edit the file. kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: ["RequestReceived"] omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" serviceAccount: {} 1 Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Leave it blank or leave it empty to scale applications in all namespaces. This field should have a namespace or be empty. The default value is empty. 2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug , info , error . The default is info . 3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json . The default is console . 4 Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. 5 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug . The default is 0 . 6 Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. Click Create to create the KEDA controller. 3.4. Understanding custom metrics autoscaler triggers Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods. The custom metrics autoscaler currently supports the Prometheus, CPU, memory, Apache Kafka, and cron triggers. You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow. You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster . 3.4.1. Understanding the Prometheus trigger You can scale pods based on Prometheus metrics, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring" for information on the configurations required to use the OpenShift Container Platform monitoring as a source for metrics. Note If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on. Example scaled object with a Prometheus target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: # ... triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job="test-app"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: false 9 unsafeSsl: false 10 1 Specifies Prometheus as the trigger type. 2 Specifies the address of the Prometheus server. This example uses OpenShift Container Platform monitoring. 3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using OpenShift Container Platform monitoring as a source for the metrics. 4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique. 5 Specifies the value that triggers scaling. Must be specified as a quoted string value. 6 Specifies the Prometheus query to use. 7 Specifies the authentication method to use. Prometheus scalers support bearer authentication ( bearer ), basic authentication ( basic ), or TLS authentication ( tls ). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret. 8 Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return. 9 Optional: Specifies how the trigger should proceed if the Prometheus target is lost. If true , the trigger continues to operate if the Prometheus target is lost. This is the default behavior. If false , the trigger returns an error if the Prometheus target is lost. 10 Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint. If false , the certificate check is performed. This is the default behavior. If true , the certificate check is not performed. Important Skipping the check is not recommended. 3.4.1.1. Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring You can use the installed OpenShift Container Platform Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform. Note These steps are not required for an external Prometheus source. You must perform the following tasks, as described in this section: Create a service account to get a token. Create a role. Add that role to the service account. Reference the token in the trigger authentication object used by Prometheus. Prerequisites OpenShift Container Platform monitoring must be installed. Monitoring of user-defined workloads must be enabled in OpenShift Container Platform monitoring, as described in the Creating a user-defined workload monitoring config map section. The Custom Metrics Autoscaler Operator must be installed. Procedure Change to the project with the object you want to scale: USD oc project my-project Use the following command to create a service account, if your cluster does not have one: USD oc create serviceaccount <service_account> where: <service_account> Specifies the name of the service account. Use the following command to locate the token assigned to the service account: USD oc describe serviceaccount <service_account> where: <service_account> Specifies the name of the service account. Example output Name: thanos Namespace: my-project Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token-9g4n5 1 Events: <none> 1 Use this token in the trigger authentication. Create a trigger authentication with the service account token: Create a YAML file similar to the following: apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 1 - parameter: bearerToken 2 name: thanos-token-9g4n5 3 key: token 4 - parameter: ca name: thanos-token-9g4n5 key: ca.crt 1 Specifies that this object uses a secret for authorization. 2 Specifies the authentication parameter to supply by using the token. 3 Specifies the name of the token to use. 4 Specifies the key in the token to use with the specified parameter. Create the CR object: USD oc create -f <file-name>.yaml Create a role for reading Thanos metrics: Create a YAML file with the following parameters: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch Create the CR object: USD oc create -f <file-name>.yaml Create a role binding for reading Thanos metrics: Create a YAML file similar to the following: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: thanos-metrics-reader 1 namespace: my-project 2 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 3 namespace: my-project 4 1 Specifies the name of the role you created. 2 Specifies the namespace of the object you want to scale. 3 Specifies the name of the service account to bind to the role. 4 Specifies the namespace of the object you want to scale. Create the CR object: USD oc create -f <file-name>.yaml You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use OpenShift Container Platform monitoring as the source, in the trigger, or scaler, you must include the following parameters: triggers.type must be prometheus triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 triggers.metadata.authModes must be bearer triggers.metadata.namespace must be set to the namespace of the object to scale triggers.authenticationRef must point to the trigger authentication resource specified in the step 3.4.2. Understanding the CPU trigger You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory trigger considers the total memory utilization of all containers in the pod. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a CPU target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: # ... triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4 1 Specifies CPU as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics. 3.4.3. Understanding the memory trigger You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of entire pod. If the pod has multiple containers, the memory utilization is the sum of all of the containers. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a memory target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: # ... triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4 1 Specifies memory as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled. 3.4.4. Understanding the Kafka trigger You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job. Note If the number of consumer groups exceeds the number of partitions in a topic, the extra consumer groups remain idle. To avoid this, by default the number of replicas does not exceed: The number of partitions on a topic, if a topic is specified The number of partitions of all topics in the consumer group, if no topic is specified The maxReplicaCount specified in scaled object or scaled job CR You can use the allowIdleConsumers parameter to disable these default behaviors. Example scaled object with a Kafka target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: # ... triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13 1 Specifies Kafka as the trigger type. 2 Specifies the name of the Kafka topic on which Kafka is processing the offset lag. 3 Specifies a comma-separated list of Kafka brokers to connect to. 4 Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag. 5 Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5 . 6 Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. 7 Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest . The default is latest . 8 Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic. If true , the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers. If false , the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default. 9 Specifies how the trigger behaves when a Kafka partition does not have a valid offset. If true , the consumers are scaled to zero for that partition. If false , the scaler keeps a single consumer for that partition. This is the default. 10 Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the polling cycle. If true , the scaler excludes partition lag in these partitions. If false , the trigger includes all consumer lag in all partitions. This is the default. 11 Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0 . 12 Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions. 13 Optional: Specifies whether to use TSL client authentication for Kafka. The default is disable . For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications". 3.4.5. Understanding the Cron trigger You can scale pods based on a time range. When the time range starts, the custom metrics autoscaler scales the pods associated with an object from the configured minimum number of pods to the specified number of desired pods. At the end of the time range, the pods are scaled back to the configured minimum. The time period must be configured in cron format . The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time. Example scaled object with a Cron trigger apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: "0 6 * * *" 5 end: "30 18 * * *" 6 desiredReplicas: "100" 7 1 Specifies the minimum number of pods to scale down to at the end of the time frame. 2 Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas . The default is 100 . 3 Specifies a Cron trigger. 4 Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database . 5 Specifies the start of the time frame. 6 Specifies the end of the time frame. 7 Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount . 3.5. Understanding custom metrics autoscaler trigger authentications A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Container Platform secrets, platform-native pod authentication mechanisms, environment variables, and so on. You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace. Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces. Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object. Example secret for Basic authentication apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: "dXNlcm5hbWU=" 1 password: "cGFzc3dvcmQ=" 1 User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded. Example trigger authentication using a secret for Basic authentication kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example cluster trigger authentication with a secret for Basic authentication kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Note that no namespace is used with a cluster trigger authentication. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example secret with certificate authority (CA) details apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t... 1 Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded. 2 Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded. Example trigger authentication using a secret for CA details kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. 6 Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint. 7 Specifies the name of the secret to use. 8 Specifies the key in the secret to use with the specified parameter. Example secret with a bearer token apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1 1 Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded. Example trigger authentication with a bearer token kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the token to use with the specified parameter. Example trigger authentication with an environment variable kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint. 3 Specify the parameter to set with this variable. 4 Specify the name of the environment variable. 5 Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object. Example trigger authentication with pod authentication providers kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint. 3 Specifies a pod identity. Supported values are none , azure , gcp , aws-eks , or aws-kiam . The default is none . Additional resources For information about OpenShift Container Platform secrets, see Providing sensitive data to pods . 3.5.1. Using trigger authentications You use trigger authentications and cluster trigger authentications by using a custom resource to create the authentication, then add a reference to a scaled object or scaled job. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you are using a secret, the Secret object must exist, for example: Example secret apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD> Procedure Create the TriggerAuthentication or ClusterTriggerAuthentication object. Create a YAML file that defines the object: Example trigger authentication with a secret kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD Create the TriggerAuthentication object: USD oc create -f <filename>.yaml Create or edit a ScaledObject YAML file that uses the trigger authentication: Create a YAML file that defines the object by running the following command: Example scaled object with a trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify TriggerAuthentication . TriggerAuthentication is the default. Example scaled object with a cluster trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify ClusterTriggerAuthentication . Create the scaled object by running the following command: USD oc apply -f <filename> 3.6. Pausing the custom metrics autoscaler for a scaled object You can pause and restart the autoscaling of a workload, as needed. For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. 3.6.1. Pausing a custom metrics autoscaler You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Add the autoscaling.keda.sh/paused-replicas annotation with any value: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling. 3.6.2. Restarting the custom metrics autoscaler for a scaled object You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject . apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Remove the autoscaling.keda.sh/paused-replicas annotation. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Remove this annotation to restart a paused custom metrics autoscaler. 3.7. Gathering audit logs You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which is the troublesome application. 3.7.1. Configuring audit logging You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Edit the KedaController custom resource to add the auditConfig stanza: kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: # ... metricsServer: # ... auditConfig: logFormat: "json" 1 logOutputVolumeClaim: "pvc-audit-log" 2 policy: rules: 3 - level: Metadata omitStages: "RequestReceived" 4 omitManagedFields: false 5 lifetime: 6 maxAge: "2" maxBackup: "1" maxSize: "50" 1 Specifies the output format of the audit log, either legacy or json . 2 Specifies an existing persistent volume claim for storing the log data. All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout. 3 Specifies which events should be recorded and what data they should include: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. 4 Specifies stages for which no event is created. 5 Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields. 6 Specifies the size and lifespan of the audit logs. maxAge : The maximum number of days to retain audit log files, based on the timestamp encoded in their filename. maxBackup : The maximum number of audit log files to retain. Set to 0 to retain all audit log files. maxSize : The maximum size in megabytes of an audit log file before it gets rotated. Verification View the audit log file directly: Obtain the name of the keda-metrics-apiserver-* pod: oc get pod -n openshift-keda Example output NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s View the log data by using a command similar to the following: USD oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: USD oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata Example output ... {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.26","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} ... Alternatively, you can view a specific log: Use a command similar to the following to log into the keda-metrics-apiserver-* pod: USD oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda For example: USD oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda Change to the /var/audit-policy/ directory: sh-4.4USD cd /var/audit-policy/ List the available logs: sh-4.4USD ls Example output log-2023.02.17-14:50 policy.yaml View the log, as needed: sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request Example output 3.8. Gathering debugging data When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. To help troubleshoot your issue, provide the following information: Data gathered using the must-gather tool. The unique cluster ID. You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items: The openshift-keda namespace and its child objects. The Custom Metric Autoscaler Operator installation objects. The Custom Metric Autoscaler Operator CRD objects. 3.8.1. Gathering debugging data The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Note The standard OpenShift Container Platform must-gather command, oc adm must-gather , does not collect Custom Metrics Autoscaler Operator data. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream by running the following command. USD oc import-image is/must-gather -n openshift Perform one of the following: To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metric Autoscaler Operator is available. To gather the default must-gather data in addition to the Custom Metric Autoscaler Operator information: Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable: USD IMAGE="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Use the oc adm must-gather with the Custom Metrics Autoscaler Operator image: USD oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE} Example 3.1. Example must-gather output for the Custom Metric Autoscaler: └── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── .insecure.log │ │ └── .log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.9. Viewing Operator metrics The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts. 3.9.1. Accessing performance metrics You can access the metrics and run queries by using the OpenShift Container Platform web console. Procedure Select the Administrator perspective in the OpenShift Container Platform web console. Select Observe Metrics . To create a custom query, add your PromQL query to the Expression field. To add multiple queries, select Add Query . 3.9.1.1. Provided Operator metrics The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console. Table 3.1. Custom Metric Autoscaler Operator metrics Metric name Description keda_scaler_activity Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive. keda_scaler_metrics_value The current value for each scaler's metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average. keda_scaler_metrics_latency The latency of retrieving the current metric from each scaler. keda_scaler_errors The number of errors that have occurred for each scaler. keda_scaler_errors_total The total number of errors encountered for all scalers. keda_scaled_object_errors The number of errors that have occurred for each scaled obejct. keda_resource_totals The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type. keda_trigger_totals The total number of triggers by trigger type. Custom Metrics Autoscaler Admission webhook metrics The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics. Metric name Description keda_scaled_object_validation_total The number of scaled object validations. keda_scaled_object_validation_errors The number of validation errors. 3.10. Understanding how to add custom metrics autoscalers To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job. You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload. 3.10.1. Adding a custom metrics autoscaler to a workload You can create a custom metrics autoscaler for a workload that is created by a Deployment , StatefulSet , or custom resource object. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you use a custom metrics autoscaler for scaling based on CPU or memory: Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage. USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> The pods associated with the object you want to scale must include specified memory and CPU limits. For example: Example pod spec apiVersion: v1 kind: Pod # ... spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: "128Mi" cpu: "500m" # ... Procedure Create a YAML file similar to the following. Only the name <2> , object name <4> , and object kind <5> are required: Example scaled object apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "0" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: "RequestReceived" omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication 1 Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a workload" section. 2 Specifies a name for this custom metrics autoscaler. 3 Optional: Specifies the API version of the target resource. The default is apps/v1 . 4 Specifies the name of the object that you want to scale. 5 Specifies the kind as Deployment , StatefulSet or CustomResource . 6 Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 7 Optional. Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0 . The default is 300 . 8 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 9 Optional: Specifies the minimum number of replicas when scaling down. 10 Optional: Specifies the parameters for audit logs. as described in the "Configuring audit logging" section. 11 Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation . 12 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 13 Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false , which keeps the replica count as it is when the scaled object is deleted. 14 Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name} . 15 Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section. 16 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses OpenShift Container Platform monitoring. 17 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledobject <scaled_object_name> Example output NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. FALLBACK : Indicates whether the custom metrics autoscaler is able to get metrics from the source If False , the custom metrics autoscaler is getting metrics. If True , the custom metrics autoscaler is getting metrics because there are no metrics or there is a problem in one or more of the objects you created. 3.10.2. Adding a custom metrics autoscaler to a job You can create a custom metrics autoscaler for any Job object. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Create a YAML file similar to the following: kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: "custom" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: "0.5" pendingPodConditions: - "Ready" - "PodScheduled" - "AnyOtherCustomPodCondition" multipleScalersCalculation : "max" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "bearer" authenticationRef: 14 name: prom-cluster-triggerauthentication 1 Specifies the maximum duration the job can run. 2 Specifies the number of retries for a job. The default is 6 . 3 Optional: Specifies how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, the default is 1 . 4 Optional: Specifies how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, the default is 1 . For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset the default is the value of the parallelism parameter. 5 Specifies the template for the pod the controller creates. 6 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 7 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 8 Optional: Specifies the number of successful finished jobs should be kept. The default is 100 . 9 Optional: Specifies how many failed jobs should be kept. The default is 100 . 10 Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 11 Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated: default : The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs. gradual : The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs. 12 Optional: Specifies a scaling strategy: default , custom , or accurate . The default is default . For more information, see the link in the "Additional resources" section that follows. 13 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. 14 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledjob <scaled_job_name> Example output NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. 3.10.3. Additional resources Understanding custom metrics autoscaler trigger authentications 3.11. Removing the Custom Metrics Autoscaler Operator You can remove the custom metrics autoscaler from your OpenShift Container Platform cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues. Note Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, OpenShift Container Platform can hang when you delete the openshift-keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR. 3.11.1. Uninstalling the Custom Metrics Autoscaler Operator Use the following procedure to remove the custom metrics autoscaler from your OpenShift Container Platform cluster. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-keda project. Remove the KedaController custom resource. Find the CustomMetricsAutoscaler Operator and click the KedaController tab. Find the custom resource, and then click Delete KedaController . Click Uninstall . Remove the Custom Metrics Autoscaler Operator: Click Operators Installed Operators . Find the CustomMetricsAutoscaler Operator and click the Options menu and select Uninstall Operator . Click Uninstall . Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components: Delete the custom metrics autoscaler CRDs: clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh USD oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted. List any custom metrics autoscaler cluster roles: USD oc get clusterrole | grep keda.sh Delete the listed custom metrics autoscaler cluster roles. For example: USD oc delete clusterrole.keda.sh-v1alpha1-admin List any custom metrics autoscaler cluster role bindings: USD oc get clusterrolebinding | grep keda.sh Delete the listed custom metrics autoscaler cluster role bindings. For example: USD oc delete clusterrolebinding.keda.sh-v1alpha1-admin Delete the custom metrics autoscaler project: USD oc delete project openshift-keda Delete the Cluster Metric Autoscaler Operator: USD oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda | [
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n openshift-keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: false 9 unsafeSsl: false 10",
"oc project my-project",
"oc create serviceaccount <service_account>",
"oc describe serviceaccount <service_account>",
"Name: thanos Namespace: my-project Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token-9g4n5 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 1 - parameter: bearerToken 2 name: thanos-token-9g4n5 3 key: token 4 - parameter: ca name: thanos-token-9g4n5 key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: thanos-metrics-reader 1 namespace: my-project 2 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 3 namespace: my-project 4",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n openshift-keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.26\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication",
"oc create -f <filename>.yaml",
"oc get scaledjob <scaled_job_name>",
"NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project openshift-keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator |
7.20. boost | 7.20. boost 7.20.1. RHBA-2015:1269 - boost bug update Updated boost packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The boost packages contain a large number of free peer-reviewed portable C++ source libraries. These libraries are suitable for tasks such as portable file-systems and time/date abstraction, serialization, unit testing, thread creation and multi-process synchronization, parsing, graphing, regular expression manipulation, and many others. Bug Fixes BZ# 1169501 When compiling a C++ program using the Boost.MPI library, the compiling process previously failed to find the "boost::mpi::environment::environment(bool)" symbol and terminated with an "undefined reference" error. This update adds the missing symbol, and the described compiling process now successfully creates an executable. BZ# 1128313 Previously, the boost packages could use packages for different architectures as their dependencies, which in some cases led to a variety of problems with the functionality of the Boost clients. With this update, dependency declarations specify the architecture of the package where relevant, and all packages necessary for correct operation of the Boost clients are downloaded properly. BZ# 1167383 , BZ# 1170010 Prior to this update, a number of Boost libraries were not compatible with the GNU Compiler Collection (GCC) provided with Red Hat Developer Toolset. A fix has been implemented to address this problem, and the affected libraries now properly work with Red Hat Developer Toolset GCC. Users of Boost are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-boost |
Chapter 5. Configuration options | Chapter 5. Configuration options This chapter lists the available configuration options for AMQ Core Protocol JMS. JMS configuration options are set as query parameters on the connection URI. For more information, see Section 4.3, "Connection URIs" . 5.1. General options user The user name the client uses to authenticate the connection. password The password the client uses to authenticate the connection. clientID The client ID that the client applies to the connection. groupID The group ID that the client applies to all produced messages. autoGroup If enabled, generate a random group ID and apply it to all produced messages. preAcknowledge If enabled, acknowledge messages as soon as they are sent and before delivery is complete. This provides "at most once" delivery. It is disabled by default. blockOnDurableSend If enabled, when sending non-transacted durable messages, block until the remote peer acknowledges receipt. It is enabled by default. blockOnNonDurableSend If enabled, when sending non-transacted non-durable messages, block until the remote peer acknowledges receipt. It is disabled by default. blockOnAcknowledge If enabled, when acknowledging non-transacted received messages, block until the remote peer confirms acknowledgment. It is disabled by default. callTimeout The time in milliseconds to wait for a blocking call to complete. The default is 30000 (30 seconds). callFailoverTimeout When the client is in the process of failing over, the time in millisconds to wait before starting a blocking call. The default is 30000 (30 seconds). ackBatchSize The number of bytes a client can receive and acknowledge before the acknowledgement is sent to the broker. The default is 1048576 (1 MiB). dupsOKBatchSize When using the DUPS_OK_ACKNOWLEDGE acknowledgment mode, the size in bytes of acknowledgment batches. The default is 1048576 (1 MiB). transactionBatchSize When receiving messsages in a transaction, the size in bytes of acknowledgment batches. The default is 1048576 (1 MiB). cacheDestinations If enabled, cache destination lookups. It is disabled by default. 5.2. TCP options tcpNoDelay If enabled, do not delay and buffer TCP sends. It is enabled by default. tcpSendBufferSize The send buffer size in bytes. The default is 32768 (32 KiB). tcpReceiveBufferSize The receive buffer size in bytes. The default is 32768 (32 KiB). writeBufferLowWaterMark The limit in bytes below which the write buffer becomes writable. The default is 32768 (32 KiB). writeBufferHighWaterMark The limit in bytes above which the write buffer becomes non-writable. The default is 131072 (128 KiB). 5.3. SSL/TLS options sslEnabled If enabled, use SSL/TLS to authenticate and encrypt connections. It is disabled by default. keyStorePath The path to the SSL/TLS key store. A key store is required for mutual SSL/TLS authentication. If unset, the value of the javax.net.ssl.keyStore system property is used. keyStorePassword The password for the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStorePassword system property is used. trustStorePath The path to the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStore system property is used. trustStorePassword The password for the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStorePassword system property is used. trustAll If enabled, trust the provided server certificate implicitly, regardless of any configured trust store. It is disabled by default. verifyHost If enabled, verify that the connection hostname matches the provided server certificate. It is disabled by default. enabledCipherSuites A comma-separated list of cipher suites to enable. If unset, the JVM default ciphers are used. enabledProtocols A comma-separated list of SSL/TLS protocols to enable. If unset, the JVM default protocols are used. 5.4. Failover options initialConnectAttempts The number of reconnect attempts allowed before the first successful connection and before the client discovers the broker topology. The default is 0, meaning only one attempt is allowed. failoverOnInitialConnection If enabled, attempt to connect to the backup server if the initial connection fails. It is disabled by default. reconnnectAttempts The number of reconnect attempts allowed before reporting the connection as failed. The default is -1, meaning no limit. retryInterval The time in milliseconds between reconnect attempts. The default is 2000 (2 seconds). retryIntervalMultiplier The multiplier used to grow the retry interval. The default is 1.0, meaning equal intervals. maxRetryInterval The maximum time in milliseconds between reconnect attempts. The default is 2000 (2 seconds). ha If enabled, track changes in the topology of HA brokers. The host and port from the URI is used only for the initial connection. After initial connection, the client receives the current failover endpoints and any updates resulting from topology changes. It is disabled by default. connectionTTL The time in milliseconds after which the connection is failed if the server sends no ping packets. The default is 60000 (1 minute). -1 disables the timeout. confirmationWindowSize The size in bytes of the command replay buffer. This is used for automatic session re-attachment on reconnect. The default is -1, meaning no automatic re-attachment. clientFailureCheckPeriod The time in milliseconds between checks for dead connections. The default is 30000 (30 seconds). -1 disables checking. 5.5. Flow control options For more information, see Chapter 8, Flow control . consumerWindowSize The size in bytes of the per-consumer message prefetch buffer. The default is 1048576 (1 MiB). -1 means no limit. 0 disables prefetching. consumerMaxRate The maximum number of messages to consume per second. The default is -1, meaning no limit. producerWindowSize The requested size in bytes for credit to produce more messages. This limits the total amount of data in flight at one time. The default is 1048576 (1 MiB). -1 means no limit. producerMaxRate The maximum number of messages to produce per second. The default is -1, meaning no limit. 5.6. Load balancing options useTopologyForLoadBalancing If enabled, use the cluster topology for connection load balancing. It is enabled by default. connectionLoadBalancingPolicyClassName The class name of the connection load balancing policy. The default is org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy . 5.7. Large message options The client can enable large message support by setting a value for the property minLargeMessageSize . Any message larger than minLargeMessageSize is considered a large message. minLargeMessageSize The minimum size in bytes at which a message is treated as a large message. The default is 102400 (100 KiB). compressLargeMessages If enabled, compress large messages, as defined by minLargeMessageSize . It is disabled by default. Note If the compressed size of a large message is less than the value of minLargeMessageSize , the message is sent as a regular message. Therefore, it is not written to the broker's large-message data directory. 5.8. Threading options useGlobalPools If enabled, use one pool of threads for all ConnectionFactory instances. Otherwise, use a separate pool for each instance. It is enabled by default. threadPoolMaxSize The maximum number of threads in the general thread pool. The default is -1, meaning no limit. scheduledThreadPoolMaxSize The maximum number of threads in the thread pool for scheduled operations. The default is 5. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/configuration_options |
10.6. Starting Geo-replication on a Newly Added Brick, Node, or Volume | 10.6. Starting Geo-replication on a Newly Added Brick, Node, or Volume 10.6.1. Starting Geo-replication for a New Brick or New Node If a geo-replication session is running, and a new node is added to the trusted storage pool or a brick is added to the volume from a newly added node in the trusted storage pool, then you must perform the following steps to start the geo-replication daemon on the new node: Run the following command on the master node where key-based SSH authentication connection is configured, in order to create a common pem pub file. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes. For example: Note There must be key-based SSH authentication access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave. After successfully setting up the shared storage volume, when a new node is added to the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information on setting up shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: For example: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . If a node is added at slave, stop the geo-replication session using the following command: Start the geo-replication session between the slave and master forcefully, using the following command: Verify the status of the created session, using the following command: Warning The following scenarios can lead to a checksum mismatch: Adding bricks to expand a geo-replicated volume. Expanding the volume while the geo-replication synchronization is in progress. Newly added brick becomes `ACTIVE` to sync the data. Self healing on the new brick is not completed. 10.6.2. Starting Geo-replication for a New Brick on an Existing Node When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required. 10.6.3. Starting Geo-replication for a New Volume To create and start a geo-replication session between a new volume added to the master cluster and a new volume added to the slave cluster, you must perform the following steps: Prerequisites There must be key-based SSH authentication without a password access between the master volume node and the slave volume node. Create the geo-replication session using the following command: For example: Note This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave. Configure the meta-volume for geo-replication: For example: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Start the geo-replication session between the slave and master, using the following command: Verify the status of the created session, using the following command: | [
"gluster system:: execute gsec_create",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL create push-pem force",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol create push-pem force",
"mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage cp /etc/fstab /var/run/gluster/fstab.tmp echo \"<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start force",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL create",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol create",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Managing_Geo-replication-Starting_Geo-replication_on_a_Newly_Added_Brick |
3.8. Backing Up and Restoring a Cluster Configuration | 3.8. Backing Up and Restoring a Cluster Configuration As of the Red Hat Enterprise Linux 7.1 release, you can back up the cluster configuration in a tarball with the following command. If you do not specify a file name, the standard output will be used. Use the following command to restore the cluster configuration files on all cluster nodes from the backup. Specifying the --local option restores the cluster configuration files only on the node from which you run this command. If you do not specify a file name, the standard input will be used. | [
"pcs config backup filename",
"pcs config restore [--local] [ filename ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-pcsbackuprestore-HAAR |
Chapter 4. Physical and Virtual Memory | Chapter 4. Physical and Virtual Memory All present-day, general-purpose computers are of the type known as stored program computers . As the name implies, stored program computers load instructions (the building blocks of programs) into some type of internal storage, where they subsequently execute those instructions. Stored program computers also use the same storage for data. This is in contrast to computers that use their hardware configuration to control their operation (such as older plugboard-based computers). The place where programs were stored on the first stored program computers went by a variety of names and used a variety of different technologies, from spots on a cathode ray tube, to pressure pulses in columns of mercury. Fortunately, present-day computers use technologies with greater storage capacity and much smaller size than ever before. 4.1. Storage Access Patterns One thing to keep in mind throughout this chapter is that computers tend to access storage in certain ways. In fact, most storage access tends to exhibit one (or both) of the following attributes: Access tends to be sequential Access tends to be localized Sequential access means that, if address N is accessed by the CPU, it is highly likely that address N +1 will be accessed . This makes sense, as most programs consist of large sections of instructions that execute -- in order -- one after the other. Localized access means that, if address X is accessed, it is likely that other addresses surrounding X will also be accessed in the future. These attributes are crucial, because it allows smaller, faster storage to effectively buffer larger, slower storage. This is the basis for implementing virtual memory. But before we can discuss virtual memory, we must examine the various storage technologies currently in use. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/ch-memory |
Release notes | Release notes Red Hat Enterprise Linux AI 1.3 Red Hat Enterprise Linux AI release notes Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/release_notes/index |
Troubleshooting Collector | Troubleshooting Collector Red Hat Advanced Cluster Security for Kubernetes 4.6 Troubleshooting Collector Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/troubleshooting_collector/index |
Policy APIs | Policy APIs OpenShift Container Platform 4.13 Reference guide for policy APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/policy_apis/index |
3.4. Associative Arrays | 3.4. Associative Arrays SystemTap also supports the use of associative arrays. While an ordinary variable represents a single value, associative arrays can represent a collection of values. Simply put, an associative array is a collection of unique keys; each key in the array has a value associated with it. Since associative arrays are normally processed in multiple probes (as we will demonstrate later), they should be declared as global variables in the SystemTap script. The syntax for accessing an element in an associative array is similar to that of awk , and is as follows: array_name [ index_expression ] Here, the array_name is any arbitrary name the array uses. The index_expression is used to see a specific unique key in the array. To illustrate, let us try to build an array named arr that specifies the ages of three people (the unique keys): tom , dick , and harry . To assign them the ages (associated values) of 23, 24, and 25 respectively, we'd use the following array statements: Example 3.11. Basic Array Statements arr["tom"] = 23 arr["dick"] = 24 arr["harry"] = 25 You can specify up to nine index expressons in an array statement, each one delimited by a comma ( , ). This is useful if you wish to have a key that contains multiple pieces of information. The following line from Example 4.9, "disktop.stp" uses 5 elements for the key: process ID, executable name, user ID, parent process ID, and string "W". It associates the value of devname with that key. device[pid(),execname(),uid(),ppid(),"W"] = devname Important All associate arrays must be declared as global , regardless of whether the associate array is used in one or multiple probes. | [
"array_name [ index_expression ]",
"arr[\"tom\"] = 23 arr[\"dick\"] = 24 arr[\"harry\"] = 25",
"device[pid(),execname(),uid(),ppid(),\"W\"] = devname"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/associativearrays |
Chapter 5. Device Drivers | Chapter 5. Device Drivers This chapter provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7.9. 5.1. New Drivers Graphics Drivers and Miscellaneous Drivers MC Driver for Intel 10nm server processors (i10nm_edac.ko.xz) 5.2. Updated Drivers Network Driver Updates The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 3.10.0-1150.el7.x86_64. VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.4.17.0-k. Storage Driver Updates QLogic FCoE Driver (bnx2fc.ko.xz) has been updated to version 2.12.13. Driver for HP Smart Array Controller (hpsa.ko.xz) has been updated to version 3.4.20-170-RH5. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.714.04.00-rh1. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.22.07.9-k. Driver for Microsemi Smart Family Controller version (smartpqi.ko.xz) has been updated to version 1.2.10-099. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.9_release_notes/device_drivers |
B.103. wireshark | B.103. wireshark B.103.1. RHSA-2010:0924 - Moderate: wireshark security update Updated wireshark packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Wireshark is a program for monitoring network traffic. Wireshark was previously known as Ethereal. CVE-2010-4300 A heap-based buffer overflow flaw was found in the Wireshark Local Download Sharing Service (LDSS) dissector. If Wireshark read a malformed packet off a network or opened a malicious dump file, it could crash or, possibly, execute arbitrary code as the user running Wireshark. CVE-2010-3445 A denial of service flaw was found in Wireshark. Wireshark could crash or stop responding if it read a malformed packet off a network, or opened a malicious dump file. Users of Wireshark should upgrade to these updated packages, which contain Wireshark version 1.2.13, and resolve these issues. All running instances of Wireshark must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/wireshark |
5.6.3. Removing a Member from a Failover Domain | 5.6.3. Removing a Member from a Failover Domain To remove a member from a failover domain, follow these steps: At the left frame of the the Cluster Configuration Tool , click the failover domain that you want to change (listed under Failover Domains ). At the bottom of the right frame (labeled Properties ), click the Edit Failover Domain Properties button. Clicking the Edit Failover Domain Properties button causes the Failover Domain Configuration dialog box to be displayed ( Figure 5.10, " Failover Domain Configuration : Configuring a Failover Domain" ). At the Failover Domain Configuration dialog box, in the Member Node column, click the node name that you want to delete from the failover domain and click the Remove Member from Domain button. Clicking Remove Member from Domain removes the node from the Member Node column. Repeat this step for each node that is to be deleted from the failover domain. (Nodes must be deleted one at a time.) When finished, click Close . At the Cluster Configuration Tool , perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running: New cluster - If this is a new cluster, choose File => Save to save the changes to the cluster configuration. Running cluster - If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-config-remove-member-failoverdm-ca |
Chapter 4. Controlling pod placement onto nodes (scheduling) | Chapter 4. Controlling pod placement onto nodes (scheduling) 4.1. Controlling pod placement using the scheduler Pod scheduling is an internal process that determines placement of new pods onto nodes within the cluster. The scheduler code has a clean separation that watches new pods as they get created and identifies the most suitable node to host them. It then creates bindings (pod to node bindings) for the pods using the master API. Default pod scheduling OpenShift Container Platform comes with a default scheduler that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod. Advanced pod scheduling In situations where you might want more control over where new pods are placed, the OpenShift Container Platform advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod. You can control pod placement by using the following scheduling features: Scheduler profiles Pod affinity and anti-affinity rules Node affinity Node selectors Taints and tolerations Node overcommitment 4.1.1. About the default scheduler The default OpenShift Container Platform pod scheduler is responsible for determining the placement of new pods onto nodes within the cluster. It reads data from the pod and finds a node that is a good fit based on configured profiles. It is completely independent and exists as a standalone solution. It does not modify the pod; it creates a binding for the pod that ties the pod to the particular node. 4.1.1.1. Understanding default scheduling The existing generic scheduler is the default platform-provided scheduler engine that selects a node to host the pod in a three-step operation: Filters the nodes The available nodes are filtered based on the constraints or requirements specified. This is done by running each node through the list of filter functions called predicates , or filters . Prioritizes the filtered list of nodes This is achieved by passing each node through a series of priority , or scoring , functions that assign it a score between 0 - 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple weight (positive numeric value) for each scoring function. The node score provided by each scoring function is multiplied by the weight (default weight for most scores is 1) and then combined by adding the scores for each node provided by all the scores. This weight attribute can be used by administrators to give higher importance to some scores. Selects the best fit node The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random. 4.1.2. Scheduler use cases One of the important use cases for scheduling within OpenShift Container Platform is to support flexible affinity and anti-affinity policies. 4.1.2.1. Infrastructure topological levels Administrators can define multiple topological levels for their infrastructure (nodes) by specifying labels on nodes. For example: region=r1 , zone=z1 , rack=s1 . These label names have no particular meaning and administrators are free to name their infrastructure levels anything, such as city/building/room. Also, administrators can define any number of levels for their infrastructure topology, with three levels usually being adequate (such as: regions zones racks ). Administrators can specify affinity and anti-affinity rules at each of these levels in any combination. 4.1.2.2. Affinity Administrators should be able to configure the scheduler to specify affinity at any topological level, or even at multiple levels. Affinity at a particular level indicates that all pods that belong to the same service are scheduled onto nodes that belong to the same level. This handles any latency requirements of applications by allowing administrators to ensure that peer pods do not end up being too geographically separated. If no node is available within the same affinity group to host the pod, then the pod is not scheduled. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 4.1.2.3. Anti-affinity Administrators should be able to configure the scheduler to specify anti-affinity at any topological level, or even at multiple levels. Anti-affinity (or 'spread') at a particular level indicates that all pods that belong to the same service are spread across nodes that belong to that level. This ensures that the application is well spread for high availability purposes. The scheduler tries to balance the service pods across all applicable nodes as evenly as possible. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules . These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods. 4.2. Scheduling pods using a scheduler profile You can configure OpenShift Container Platform to use a scheduling profile to schedule pods onto nodes within the cluster. 4.2.1. About scheduler profiles You can specify a scheduler profile to control how pods are scheduled onto nodes. The following scheduler profiles are available: LowNodeUtilization This profile attempts to spread pods evenly across nodes to get low resource usage per node. This profile provides the default scheduler behavior. HighNodeUtilization This profile attempts to place as many pods as possible on to as few nodes as possible. This minimizes node count and has high resource usage per node. Note Switching to the HighNodeUtilization scheduler profile will result in all pods of a ReplicaSet object being scheduled on the same node. This will add an increased risk for pod failure if the node fails. NoScoring This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plugins. This might sacrifice better scheduling decisions for faster ones. 4.2.2. Configuring a scheduler profile You can configure the scheduler to use a scheduler profile. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the Scheduler object: USD oc edit scheduler cluster Specify the profile to use in the spec.profile field: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: mastersSchedulable: false profile: HighNodeUtilization 1 #... 1 Set to LowNodeUtilization , HighNodeUtilization , or NoScoring . Save the file to apply the changes. 4.3. Placing pods relative to other pods using affinity and anti-affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. 4.3.1. Understanding pod affinity Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods. Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod. Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod. For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes, availability zones, or availability sets to reduce correlated failures. Note A label selector might match pods with multiple pod deployments. Use unique combinations of labels when configuring anti-affinity rules to avoid matching pods. There are two types of pod affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note Depending on your pod priority and preemption settings, the scheduler might not be able to find an appropriate node for a pod without violating affinity requirements. If so, a pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. You configure pod affinity/anti-affinity through the Pod spec files. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example shows a Pod spec configured for pod affinity and anti-affinity. In this example, the pod affinity rule indicates that the pod can schedule onto a node only if that node has at least one already-running pod with a label that has the key security and value S1 . The pod anti-affinity rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label having key security and value S2 . Sample Pod config file with pod affinity apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Stanza to configure pod affinity. 2 Defines a required rule. 3 5 The key and value (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Sample Pod config file with pod anti-affinity apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Note If labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node. 4.3.2. Configuring a pod affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses affinity to allow scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters to add the affinity: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1-east # ... spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5 # ... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 5 Specify a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.3. Configuring a pod anti-affinity rule The following steps demonstrate a simple two-pod configuration that creates pod with a label and a pod that uses an anti-affinity preferred rule to attempt to prevent scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s2-east # ... spec: # ... affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6 # ... 1 Adds a pod anti-affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 For a preferred rule, specifies a weight for the node, 1-100. The node that with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to not be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 6 Specifies a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.4. Sample pod affinity and anti-affinity rules The following examples demonstrate pod affinity and pod anti-affinity. 4.3.4.1. Pod Affinity The following example demonstrates pod affinity for pods with matching labels and label selectors. The pod team4 has the label team:4 . apiVersion: v1 kind: Pod metadata: name: team4 labels: team: "4" # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod team4a has the label selector team:4 under podAffinity . apiVersion: v1 kind: Pod metadata: name: team4a # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - "4" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The team4a pod is scheduled on the same node as the team4 pod. 4.3.4.2. Pod Anti-affinity The following example demonstrates pod anti-affinity for pods with matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 has the label selector security:s1 under podAntiAffinity . apiVersion: v1 kind: Pod metadata: name: pod-s2 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 cannot be scheduled on the same node as pod-s1 . 4.3.4.3. Pod Affinity with no Matching Labels The following example demonstrates pod affinity for pods without matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 has the label selector security:s2 . apiVersion: v1 kind: Pod metadata: name: pod-s2 # ... spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... The pod pod-s2 is not scheduled unless there is a node with a pod that has the security:s2 label. If there is no other pod with that label, the new pod remains in a pending state: Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none> 4.3.5. Using pod affinity and anti-affinity to control where an Operator is installed By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes. The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes: If an Operator requires a particular platform, such as amd64 or arm64 If an Operator requires a particular operating system, such as Linux or Windows If you want Operators that work together scheduled on the same host or on hosts located on the same rack If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues You can control where an Operator pod is installed by adding a pod affinity or anti-affinity to the Operator's Subscription object. The following example shows how to use pod anti-affinity to prevent the installation the Custom Metrics Autoscaler Operator from any node that has pods with a specific label: Pod affinity example that places the Operator pod on one or more specific nodes apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #... 1 A pod affinity that places the Operator's pod on a node that has pods with the app=test label. Pod anti-affinity example that prevents the Operator pod from one or more specific nodes apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #... 1 A pod anti-affinity that prevents the Operator's pod from being scheduled on a node that has pods with the cpu=high label. Procedure To control the placement of an Operator pod, complete the following steps: Install the Operator as usual. If needed, ensure that your nodes are labeled to properly respond to the affinity. Edit the Operator Subscription object to add an affinity: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #... 1 Add a podAffinity or podAntiAffinity . Verification To ensure that the pod is deployed on the specific node, run the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none> 4.4. Controlling pod placement on nodes using node affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. In OpenShift Container Platform node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on the nodes and label selectors specified in pods. 4.4.1. Understanding node affinity Node affinity allows a pod to specify an affinity towards a group of nodes it can be placed on. The node does not have control over the placement. For example, you could configure a pod to only run on a node with a specific CPU or in a specific availability zone. There are two types of node affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note If labels on a node change at runtime that results in an node affinity rule on a pod no longer being met, the pod continues to run on the node. You configure node affinity through the Pod spec file. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example is a Pod spec with a rule that requires the pod be placed on a node with a label whose key is e2e-az-NorthSouth and whose value is either e2e-az-North or e2e-az-South : Example pod configuration file with a node affinity required rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... 1 The stanza to configure node affinity. 2 Defines a required rule. 3 5 6 The key/value pair (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . The following example is a node specification with a preferred rule that a node with a label whose key is e2e-az-EastWest and whose value is either e2e-az-East or e2e-az-West is preferred for the pod: Example pod configuration file with a node affinity preferred rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] # ... 1 The stanza to configure node affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with highest weight is preferred. 4 6 7 The key/value pair (label) that must be matched to apply the rule. 5 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . There is no explicit node anti-affinity concept, but using the NotIn or DoesNotExist operator replicates that behavior. Note If you are using node affinity and node selectors in the same pod configuration, note the following: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. 4.4.2. Configuring a required node affinity rule Required rules must be met before a pod can be scheduled on a node. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler is required to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az1 Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #... Create a pod with a specific label in the pod spec: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. Example output apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod: USD oc create -f <file-name>.yaml 4.4.3. Configuring a preferred node affinity rule Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler tries to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az3 Create a pod with a specific label: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #... 1 Adds a pod affinity. 2 Configures the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies a weight for the node, as a number 1-100. The node with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod. USD oc create -f <file-name>.yaml 4.4.4. Sample node affinity rules The following examples demonstrate node affinity. 4.4.4.1. Node affinity with matching labels The following example demonstrates node affinity for a node and pod with matching labels: The Node1 node has the label zone:us : USD oc label node node1 zone=us Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod can be scheduled on Node1: USD oc get pod -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1 4.4.4.2. Node affinity with no matching labels The following example demonstrates node affinity for a node and pod without matching labels: The Node1 node has the label zone:emea : USD oc label node node1 zone=emea Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod cannot be scheduled on Node1: USD oc describe pod pod-s1 Example output ... Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1). 4.4.5. Using node affinity to control where an Operator is installed By default, when you install an Operator, OpenShift Container Platform installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes. The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes: If an Operator requires a particular platform, such as amd64 or arm64 If an Operator requires a particular operating system, such as Linux or Windows If you want Operators that work together scheduled on the same host or on hosts located on the same rack If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues You can control where an Operator pod is installed by adding a node affinity constraints to the Operator's Subscription object. The following examples show how to use node affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster: Node affinity example that places the Operator pod on a specific node apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #... 1 A node affinity that requires the Operator's pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal . Node affinity example that places the Operator pod on a node with a specific platform apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #... 1 A node affinity that requires the Operator's pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels. Procedure To control the placement of an Operator pod, complete the following steps: Install the Operator as usual. If needed, ensure that your nodes are labeled to properly respond to the affinity. Edit the Operator Subscription object to add an affinity: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #... 1 Add a nodeAffinity . Verification To ensure that the pod is deployed on the specific node, run the following command: USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none> 4.4.6. Additional resources Understanding how to update labels on nodes 4.5. Placing pods onto overcommited nodes In an overcommited state, the sum of the container compute resource requests and limits exceeds the resources available on the system. Overcommitment might be desirable in development environments where a trade-off of guaranteed performance for capacity is acceptable. Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. 4.5.1. Understanding overcommitment Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes by configuring masters to override the ratio between request and limit set on developer containers. In conjunction with a per-project LimitRange object specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit. Note That these overrides have no effect if no limits have been set on containers. Create a LimitRange object with default limits, per individual project, or in the project template, to ensure that the overrides apply. After these overrides, the container limits and requests must still be validated by any LimitRange object in the project. It is possible, for example, for developers to specify a limit close to the minimum limit, and have the request then be overridden below the minimum limit, causing the pod to be forbidden. This unfortunate user experience should be addressed with future work, but for now, configure this capability and LimitRange objects with caution. 4.5.2. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 4.6. Controlling pod placement using node taints Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them. 4.6.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 4.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 4.6.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that does tolerate the taint, which has the tolerationSeconds parameter, the pod is not evicted until that time period expires. Example output apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Here, if this pod is running but does not have a matching toleration, the pod stays bound to the node for 3,600 seconds and then be evicted. If the taint is removed before that time, the pod is not evicted. 4.6.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries to not schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" #... In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 4.6.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. The new BestEffort pods do not get scheduled onto the affected node. 4.6.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #... 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding pod eviction. Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 4.6.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and values parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - operator: "Exists" #... 4.6.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 4.6.2.1. Adding taints and tolerations using a compute machine set You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a compute machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. Scale down the compute machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the compute machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 4.6.2.2. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 4.6.2.3. Creating a project with a node selector and toleration You can create a project that uses a node selector and toleration, which are set as annotations, to control the placement of pods onto specific nodes. Any subsequent resources created in the project are then scheduled on nodes that have a taint matching the toleration. Prerequisites A label for node selection has been added to one or more nodes by using a compute machine set or editing the node directly. A taint has been added to one or more nodes by using a compute machine set or editing the node directly. Procedure Create a Project resource definition, specifying a node selector and toleration in the metadata.annotations section: Example project.yaml file kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{"operator": "Exists", "effect": "NoSchedule", "key": "<key_name>"} 3 ] 1 The project name. 2 The default node selector label. 3 The toleration parameters, as described in the Taint and toleration components table. This example uses the NoSchedule effect, which allows existing pods on the node to remain, and the Exists operator, which does not take a value. Use the oc apply command to create the project: USD oc apply -f project.yaml Any subsequent resources created in the <project_name> namespace should now be scheduled on the specified nodes. Additional resources Adding taints and tolerations manually to nodes or with compute machine sets Creating project-wide node selectors Pod placement of Operator workloads 4.6.2.4. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my_node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 4.6.3. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 4.7. Placing pods on specific nodes using node selectors A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 4.7.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #... 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: 1 region: east type: user-node #... 1 Node selectors to match the node label. The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: defaultNodeSelector: type=user-node,region=east #... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: region: east #... When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" #... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region #... spec: nodeSelector: region: east type: user-node #... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not be created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod metadata: name: west-region #... spec: nodeSelector: region: west #... 4.7.2. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" # ... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" # ... Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.28.5 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. 4.7.3. Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a compute machine set or editing the node directly: Use a compute machine set to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... Redeploy the nodes associated with that compute machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.28.5 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.28.5 4.7.4. Creating project-wide node selectors You can use node selectors in a project together with labels on nodes to constrain all pods created in that project to the labeled nodes. When you create a pod in this project, OpenShift Container Platform adds the node selectors to the pods in the project and schedules the pods on a node with matching labels in the project. If there is a cluster-wide default node selector, a project node selector takes preference. You add node selectors to a project by editing the Namespace object to add the openshift.io/node-selector parameter. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. A pod is not scheduled if the Pod object contains a node selector, but no project has a matching node selector. When you create a pod from that spec, you receive an error similar to the following message: Example error message Error from server (Forbidden): error when creating "pod.yaml": pods "pod-4" is forbidden: pod node label selector conflicts with its project node label selector Note You can add additional key/value pairs to a pod. But you cannot add a different value for a project key. Procedure To add a default project node selector: Create a namespace or edit an existing namespace to add the openshift.io/node-selector parameter: USD oc edit namespace <name> Example output apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "type=user-node,region=east" 1 openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: "2021-05-10T12:35:04Z" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: "145537" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes 1 Add the openshift.io/node-selector with the appropriate <key>:<value> pairs. Add labels to a node by using a compute machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the compute machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node Redeploy the nodes associated with that compute machine set: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.28.5 Add labels directly to a node: Edit the Node object to add labels: USD oc label <resource> <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the Node object using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.28.5 Additional resources Creating a project with a node selector and toleration 4.8. Controlling pod placement by using pod topology spread constraints You can use pod topology spread constraints to provide fine-grained control over the placement of your pods across nodes, zones, regions, or other user-defined topology domains. Distributing pods across failure domains can help to achieve high availability and more efficient resource utilization. 4.8.1. Example use cases As an administrator, I want my workload to automatically scale between two to fifteen pods. I want to ensure that when there are only two pods, they are not placed on the same node, to avoid a single point of failure. As an administrator, I want to distribute my pods evenly across multiple infrastructure zones to reduce latency and network costs. I want to ensure that my cluster can self-heal if issues arise. 4.8.2. Important considerations Pods in an OpenShift Container Platform cluster are managed by workload controllers such as deployments, stateful sets, or daemon sets. These controllers define the desired state for a group of pods, including how they are distributed and scaled across the nodes in the cluster. You should set the same pod topology spread constraints on all pods in a group to avoid confusion. When using a workload controller, such as a deployment, the pod template typically handles this for you. Mixing different pod topology spread constraints can make OpenShift Container Platform behavior confusing and troubleshooting more difficult. You can avoid this by ensuring that all nodes in a topology domain are consistently labeled. OpenShift Container Platform automatically populates well-known labels, such as kubernetes.io/hostname . This helps avoid the need for manual labeling of nodes. These labels provide essential topology information, ensuring consistent node labeling across the cluster. Only pods within the same namespace are matched and grouped together when spreading due to a constraint. You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed. 4.8.3. Understanding skew and maxSkew Skew refers to the difference in the number of pods that match a specified label selector across different topology domains, such as zones or nodes. The skew is calculated for each domain by taking the absolute difference between the number of pods in that domain and the number of pods in the domain with the lowest amount of pods scheduled. Setting a maxSkew value guides the scheduler to maintain a balanced pod distribution. 4.8.3.1. Example skew calculation You have three zones (A, B, and C), and you want to distribute your pods evenly across these zones. If zone A has 5 pods, zone B has 3 pods, and zone C has 2 pods, to find the skew, you can subtract the number of pods in the domain with the lowest amount of pods scheduled from the number of pods currently in each zone. This means that the skew for zone A is 3, the skew for zone B is 1, and the skew for zone C is 0. 4.8.3.2. The maxSkew parameter The maxSkew parameter defines the maximum allowable difference, or skew, in the number of pods between any two topology domains. If maxSkew is set to 1 , the number of pods in any topology domain should not differ by more than 1 from any other domain. If the skew exceeds maxSkew , the scheduler attempts to place new pods in a way that reduces the skew, adhering to the constraints. Using the example skew calculation, the skew values exceed the default maxSkew value of 1 . The scheduler places new pods in zone B and zone C to reduce the skew and achieve a more balanced distribution, ensuring that no topology domain exceeds the skew of 1. 4.8.4. Example configurations for pod topology spread constraints You can specify which pods to group together, which topology domains they are spread among, and the acceptable skew. The following examples demonstrate pod topology spread constraint configurations. Example to distribute pods that match the specified labels based on their zone apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 The maximum difference in number of pods between any two topology domains. The default is 1 , and you cannot specify a value of 0 . 2 The key of a node label. Nodes with this key and identical value are considered to be in the same topology. 3 How to handle a pod if it does not satisfy the spread constraint. The default is DoNotSchedule , which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced. 4 Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched. 5 Be sure that this Pod spec also sets its labels to match this label selector if you want it to be counted properly in the future. 6 A list of pod label keys to select which pods to calculate spreading over. Example demonstrating a single pod topology spread constraint kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] The example defines a Pod spec with a one pod topology spread constraint. It matches on pods labeled region: us-east , distributes among zones, specifies a skew of 1 , and does not schedule the pod if it does not meet these requirements. Example demonstrating multiple pod topology spread constraints kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] The example defines a Pod spec with two pod topology spread constraints. Both match on pods labeled region: us-east , specify a skew of 1 , and do not schedule the pod if it does not meet these requirements. The first constraint distributes pods based on a user-defined label node , and the second constraint distributes pods based on a user-defined label rack . Both constraints must be met for the pod to be scheduled. 4.8.5. Additional resources Understanding how to update labels on nodes 4.9. Descheduler 4.9.1. Descheduler overview While the scheduler is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. 4.9.1.1. About the descheduler You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes. You can benefit from descheduling running pods in situations such as the following: Nodes are underutilized or overutilized. Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes. Node failure requires pods to be moved. New nodes are added to clusters. Pods have been restarted too many times. Important The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods. When the descheduler decides to evict pods from a node, it employs the following general mechanism: Pods in the openshift-* and kube-system namespaces are never evicted. Critical pods with priorityClassName set to system-cluster-critical or system-node-critical are never evicted. Static, mirrored, or stand-alone pods that are not part of a replication controller, replica set, deployment, or job are never evicted because these pods will not be recreated. Pods associated with daemon sets are never evicted. Pods with local storage are never evicted. Best effort pods are evicted before burstable and guaranteed pods. All types of pods with the descheduler.alpha.kubernetes.io/evict annotation are eligible for eviction. This annotation is used to override checks that prevent eviction, and the user can select which pod is evicted. Users should know how and if the pod will be recreated. Pods subject to pod disruption budget (PDB) are not evicted if descheduling violates its pod disruption budget (PDB). The pods are evicted by using eviction subresource to handle PDB. 4.9.1.2. Descheduler profiles The following descheduler profiles are available: AffinityAndTaints This profile evicts pods that violate inter-pod anti-affinity, node affinity, and node taints. It enables the following strategies: RemovePodsViolatingInterPodAntiAffinity : removes pods that are violating inter-pod anti-affinity. RemovePodsViolatingNodeAffinity : removes pods that are violating node affinity. RemovePodsViolatingNodeTaints : removes pods that are violating NoSchedule taints on nodes. Pods with a node affinity type of requiredDuringSchedulingIgnoredDuringExecution are removed. TopologyAndDuplicates This profile evicts pods in an effort to evenly spread similar pods, or pods of the same topology domain, among nodes. It enables the following strategies: RemovePodsViolatingTopologySpreadConstraint : finds unbalanced topology domains and tries to evict pods from larger ones when DoNotSchedule constraints are violated. RemoveDuplicates : ensures that there is only one pod associated with a replica set, replication controller, deployment, or job running on same node. If there are more, those duplicate pods are evicted for better pod distribution in a cluster. LifecycleAndUtilization This profile evicts long-running pods and balances resource usage between nodes. It enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times. Pods where the sum of restarts over all containers (including Init Containers) is more than 100. LowNodeUtilization : finds nodes that are underutilized and evicts pods, if possible, from overutilized nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). PodLifeTime : evicts pods that are too old. By default, pods that are older than 24 hours are removed. You can customize the pod lifetime value. SoftTopologyAndDuplicates This profile is the same as TopologyAndDuplicates , except that pods with soft topology constraints, such as whenUnsatisfiable: ScheduleAnyway , are also considered for eviction. Note Do not enable both SoftTopologyAndDuplicates and TopologyAndDuplicates . Enabling both results in a conflict. EvictPodsWithLocalStorage This profile allows pods with local storage to be eligible for eviction. EvictPodsWithPVC This profile allows pods with persistent volume claims to be eligible for eviction. If you are using Kubernetes NFS Subdir External Provisioner , you must add an excluded namespace for the namespace where the provisioner is installed. 4.9.2. Kube Descheduler Operator release notes The Kube Descheduler Operator allows you to evict pods so that they can be rescheduled on more appropriate nodes. These release notes track the development of the Kube Descheduler Operator. For more information, see About the descheduler . 4.9.2.1. Release notes for Kube Descheduler Operator 5.0.2 Issued: 2 December 2024 The following advisory is available for the Kube Descheduler Operator 5.0.2: RHSA-2024:8704 4.9.2.1.1. Bug fixes This release of the Kube Descheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.9.2.2. Release notes for Kube Descheduler Operator 5.0.1 Issued: 1 July 2024 The following advisory is available for the Kube Descheduler Operator 5.0.1: RHSA-2024:3617 4.9.2.2.1. New features and enhancements You can now install and use the Kube Descheduler Operator in an OpenShift Container Platform cluster running in FIPS mode. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. This release of the Kube Descheduler Operator updates the Kubernetes version to 1.29. 4.9.2.2.2. Bug fixes This release of the Kube Descheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.9.2.3. Release notes for Kube Descheduler Operator 5.0.0 Issued: 6 March 2024 The following advisory is available for the Kube Descheduler Operator 5.0.0: RHSA-2024:0302 4.9.2.3.1. Notable changes With this release, the Kube Descheduler Operator delivers updates independent of the OpenShift Container Platform minor version release stream. 4.9.2.3.2. Bug fixes Previously, the descheduler pod logs showed the following warning about the Operator's version: failed to convert Descheduler minor version to float . With this update, the warning is no longer shown. ( OCPBUGS-14042 ) 4.9.3. Evicting pods using the descheduler You can run the descheduler in OpenShift Container Platform by installing the Kube Descheduler Operator and setting the desired profiles and other customizations. 4.9.3.1. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . 4.9.3.2. Configuring descheduler profiles You can configure which profiles the descheduler uses to evict pods. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Specify one or more profiles in the spec.profiles section. apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC 1 Optional: By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . 2 Optional: Set a list of user-created namespaces to include or exclude from descheduler operations. Use excluded to set a list of namespaces to exclude or use included to set a list of namespaces to include. Note that protected namespaces ( openshift-* , kube-system , hypershift ) are excluded by default. 3 Optional: Enable a custom pod lifetime value for the LifecycleAndUtilization profile. Valid units are s , m , or h . The default pod lifetime is 24 hours. 4 Optional: Specify a priority threshold to consider pods for eviction only if their priority is lower than the specified level. Use the thresholdPriority field to set a numerical priority threshold (for example, 10000 ) or use the thresholdPriorityClassName field to specify a certain priority class name (for example, my-priority-class-name ). If you specify a priority class name, it must already exist or the descheduler will throw an error. Do not set both thresholdPriority and thresholdPriorityClassName . 5 Add one or more profiles to enable. Available profiles: AffinityAndTaints , TopologyAndDuplicates , LifecycleAndUtilization , SoftTopologyAndDuplicates , EvictPodsWithLocalStorage , and EvictPodsWithPVC . 6 Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . Enabling both results in a conflict. You can enable multiple profiles; the order that the profiles are specified in is not important. Save the file to apply the changes. 4.9.3.3. Configuring the descheduler interval You can configure the amount of time between descheduler runs. The default is 3600 seconds (one hour). Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Update the deschedulingIntervalSeconds field to the desired value: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1 ... 1 Set the number of seconds between descheduler runs. A value of 0 in this field runs the descheduler once and exits. Save the file to apply the changes. 4.9.4. Uninstalling the Kube Descheduler Operator You can remove the Kube Descheduler Operator from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 4.9.4.1. Uninstalling the descheduler You can remove the descheduler from your cluster by removing the descheduler instance and uninstalling the Kube Descheduler Operator. This procedure also cleans up the KubeDescheduler CRD and openshift-kube-descheduler-operator namespace. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Delete the descheduler instance. From the Operators Installed Operators page, click Kube Descheduler Operator . Select the Kube Descheduler tab. Click the Options menu to the cluster entry and select Delete KubeDescheduler . In the confirmation dialog, click Delete . Uninstall the Kube Descheduler Operator. Navigate to Operators Installed Operators . Click the Options menu to the Kube Descheduler Operator entry and select Uninstall Operator . In the confirmation dialog, click Uninstall . Delete the openshift-kube-descheduler-operator namespace. Navigate to Administration Namespaces . Enter openshift-kube-descheduler-operator into the filter box. Click the Options menu to the openshift-kube-descheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-kube-descheduler-operator and click Delete . Delete the KubeDescheduler CRD. Navigate to Administration Custom Resource Definitions . Enter KubeDescheduler into the filter box. Click the Options menu to the KubeDescheduler entry and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . 4.10. Secondary scheduler 4.10.1. Secondary scheduler overview You can install the Secondary Scheduler Operator to run a custom secondary scheduler alongside the default scheduler to schedule pods. 4.10.1.1. About the Secondary Scheduler Operator The Secondary Scheduler Operator for Red Hat OpenShift provides a way to deploy a custom secondary scheduler in OpenShift Container Platform. The secondary scheduler runs alongside the default scheduler to schedule pods. Pod configurations can specify which scheduler to use. The custom scheduler must have the /bin/kube-scheduler binary and be based on the Kubernetes scheduling framework . Important You can use the Secondary Scheduler Operator to deploy a custom secondary scheduler in OpenShift Container Platform, but Red Hat does not directly support the functionality of the custom secondary scheduler. The Secondary Scheduler Operator creates the default roles and role bindings required by the secondary scheduler. You can specify which scheduling plugins to enable or disable by configuring the KubeSchedulerConfiguration resource for the secondary scheduler. 4.10.2. Secondary Scheduler Operator for Red Hat OpenShift release notes The Secondary Scheduler Operator for Red Hat OpenShift allows you to deploy a custom secondary scheduler in your OpenShift Container Platform cluster. These release notes track the development of the Secondary Scheduler Operator for Red Hat OpenShift. For more information, see About the Secondary Scheduler Operator . 4.10.2.1. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.2.2 Issued: 18 November 2024 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.2.2: RHSA-2024:8219 4.10.2.1.1. Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.1.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.2. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.2.1 Issued: 6 March 2024 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.2.1: RHSA-2024:0281 4.10.2.2.1. New features and enhancements Resource limits removed to support large clusters With this release, resource limits were removed to allow you to use the Secondary Scheduler Operator for large clusters with many nodes and pods without failing due to out-of-memory errors. 4.10.2.2.2. Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.2.3. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.2.3. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.2.0 Issued: 1 November 2023 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.2.0: RHSA-2023:6154 4.10.2.3.1. Bug fixes This release of the Secondary Scheduler Operator addresses several Common Vulnerabilities and Exposures (CVEs). 4.10.2.3.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( WRKLDS-645 ) 4.10.3. Scheduling pods using a secondary scheduler You can run a custom secondary scheduler in OpenShift Container Platform by installing the Secondary Scheduler Operator, deploying the secondary scheduler, and setting the secondary scheduler in the pod definition. 4.10.3.1. Installing the Secondary Scheduler Operator You can use the web console to install the Secondary Scheduler Operator for Red Hat OpenShift. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-secondary-scheduler-operator in the Name field and click Create . Install the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Operators OperatorHub . Enter Secondary Scheduler Operator for Red Hat OpenShift into the filter box. Select the Secondary Scheduler Operator for Red Hat OpenShift and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Secondary Scheduler Operator for Red Hat OpenShift. Select A specific namespace on the cluster and select openshift-secondary-scheduler-operator from the drop-down menu. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that Secondary Scheduler Operator for Red Hat OpenShift is listed with a Status of Succeeded . 4.10.3.2. Deploying a secondary scheduler After you have installed the Secondary Scheduler Operator, you can deploy a secondary scheduler. Prerequisities You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Create config map to hold the configuration for the secondary scheduler. Navigate to Workloads ConfigMaps . Click Create ConfigMap . In the YAML editor, enter the config map definition that contains the necessary KubeSchedulerConfiguration configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: "secondary-scheduler-config" 1 namespace: "openshift-secondary-scheduler-operator" 2 data: "config.yaml": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated 1 The name of the config map. This is used in the Scheduler Config field when creating the SecondaryScheduler CR. 2 The config map must be created in the openshift-secondary-scheduler-operator namespace. 3 The KubeSchedulerConfiguration resource for the secondary scheduler. For more information, see KubeSchedulerConfiguration in the Kubernetes API documentation. 4 The name of the secondary scheduler. Pods that set their spec.schedulerName field to this value are scheduled with this secondary scheduler. 5 The plugins to enable or disable for the secondary scheduler. For a list default scheduling plugins, see Scheduling plugins in the Kubernetes documentation. Click Create . Create the SecondaryScheduler CR: Navigate to Operators Installed Operators . Select Secondary Scheduler Operator for Red Hat OpenShift . Select the Secondary Scheduler tab and click Create SecondaryScheduler . The Name field defaults to cluster ; do not change this name. The Scheduler Config field defaults to secondary-scheduler-config . Ensure that this value matches the name of the config map created earlier in this procedure. In the Scheduler Image field, enter the image name for your custom scheduler. Important Red Hat does not directly support the functionality of your custom secondary scheduler. Click Create . 4.10.3.3. Scheduling a pod using the secondary scheduler To schedule a pod using the secondary scheduler, set the schedulerName field in the pod definition. Prerequisities You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. A secondary scheduler is configured. Procedure Log in to the OpenShift Container Platform web console. Navigate to Workloads Pods . Click Create Pod . In the YAML editor, enter the desired pod configuration and add the schedulerName field: apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1 1 The schedulerName field must match the name that is defined in the config map when you configured the secondary scheduler. Click Create . Verification Log in to the OpenShift CLI. Describe the pod using the following command: USD oc describe pod nginx -n default Example output Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp ... In the events table, find the event with a message similar to Successfully assigned <namespace>/<pod_name> to <node_name> . In the "From" column, verify that the event was generated from the secondary scheduler and not the default scheduler. Note You can also check the secondary-scheduler-* pod logs in the openshift-secondary-scheduler-namespace to verify that the pod was scheduled by the secondary scheduler. 4.10.4. Uninstalling the Secondary Scheduler Operator You can remove the Secondary Scheduler Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 4.10.4.1. Uninstalling the Secondary Scheduler Operator You can uninstall the Secondary Scheduler Operator for Red Hat OpenShift by using the web console. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the Secondary Scheduler Operator for Red Hat OpenShift Operator. Navigate to Operators Installed Operators . Click the Options menu to the Secondary Scheduler Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 4.10.4.2. Removing Secondary Scheduler Operator resources Optionally, after uninstalling the Secondary Scheduler Operator for Red Hat OpenShift, you can remove its related resources from your cluster. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were installed by the Secondary Scheduler Operator: Navigate to Administration CustomResourceDefinitions . Enter SecondaryScheduler in the Name field to filter the CRDs. Click the Options menu to the SecondaryScheduler CRD and select Delete Custom Resource Definition : Remove the openshift-secondary-scheduler-operator namespace. Navigate to Administration Namespaces . Click the Options menu to the openshift-secondary-scheduler-operator and select Delete Namespace . In the confirmation dialog, enter openshift-secondary-scheduler-operator in the field and click Delete . | [
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc label node node1 e2e-az-name=e2e-az1",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"oc label node node1 e2e-az-name=e2e-az3",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]",
"oc apply -f project.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.28.5",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.28.5",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.28.5",
"Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector",
"oc edit namespace <name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.28.5",
"oc label <resource> <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.28.5",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1",
"apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated",
"apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1",
"oc describe pod nginx -n default",
"Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/nodes/controlling-pod-placement-onto-nodes-scheduling |
B.5. Glock Holders | B.5. Glock Holders Table B.5, "Glock holder flags" shows the meanings of the different glock holder flags. Table B.5. Glock holder flags Flag Name Meaning a Async Do not wait for glock result (will poll for result later) A Any Any compatible lock mode is acceptable c No cache When unlocked, demote DLM lock immediately e No expire Ignore subsequent lock cancel requests E Exact Must have exact lock mode F First Set when holder is the first to be granted for this lock H Holder Indicates that requested lock is granted p Priority Enqueue holder at the head of the queue t Try A "try" lock T Try 1CB A "try" lock that sends a callback W Wait Set while waiting for request to complete The most important holder flags are H (holder) and W (wait) as mentioned earlier, since they are set on granted lock requests and queued lock requests respectively. The ordering of the holders in the list is important. If there are any granted holders, they will always be at the head of the queue, followed by any queued holders. If there are no granted holders, then the first holder in the list will be the one that triggers the state change. Since demote requests are always considered higher priority than requests from the file system, that might not always directly result in a change to the state requested. The glock subsystem supports two kinds of "try" lock. These are useful both because they allow the taking of locks out of the normal order (with suitable back-off and retry) and because they can be used to help avoid resources in use by other nodes. The normal t (try) lock is just what its name indicates; it is a "try" lock that does not do anything special. The T ( try 1CB ) lock, on the other hand, is identical to the t lock except that the DLM will send a single callback to current incompatible lock holders. One use of the T ( try 1CB ) lock is with the iopen locks, which are used to arbitrate among the nodes when an inode's i_nlink count is zero, and determine which of the nodes will be responsible for deallocating the inode. The iopen glock is normally held in the shared state, but when the i_nlink count becomes zero and ->evict_inode () is called, it will request an exclusive lock with T ( try 1CB ) set. It will continue to deallocate the inode if the lock is granted. If the lock is not granted it will result in the node(s) which were preventing the grant of the lock marking their glock(s) with the D (demote) flag, which is checked at ->drop_inode () time in order to ensure that the deallocation is not forgotten. This means that inodes that have zero link count but are still open will be deallocated by the node on which the final close () occurs. Also, at the same time as the inode's link count is decremented to zero the inode is marked as being in the special state of having zero link count but still in use in the resource group bitmap. This functions like the ext3 file system3's orphan list in that it allows any subsequent reader of the bitmap to know that there is potentially space that might be reclaimed, and to attempt to reclaim it. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ap-glock-holders-gfs2 |
Chapter 3. Configuring external alertmanager instances | Chapter 3. Configuring external alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances by configuring the cluster-monitoring-config config map in either the openshift-monitoring project or the user-workload-monitoring-config project. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have installed the OpenShift CLI ( oc ). If you are configuring core OpenShift Container Platform monitoring components in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config config map. Procedure Edit the ConfigMap object. To configure additional Alertmanagers for routing alerts from core OpenShift Container Platform projects : Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an additionalAlertmanagerConfigs: section under data/config.yaml/prometheusK8s . Add the configuration details for additional Alertmanagers in this section: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> For <alertmanager_specification> , substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com To configure additional Alertmanager instances for routing alerts from user-defined projects : Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a <component>/additionalAlertmanagerConfigs: section under data/config.yaml/ . Add the configuration details for additional Alertmanagers in this section: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification> For <component> , substitute one of two supported external Alertmanager components: prometheus or thanosRuler . For <alertmanager_specification> , substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager using Thanos Ruler with a bearer token and client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Save the file to apply the changes to the ConfigMap object. The new component placement configuration is applied automatically. 3.1. Attaching additional labels to your time series and alerts Using the external labels feature of Prometheus, you can attach custom labels to all time series and alerts leaving Prometheus. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. For example, to add metadata about the region and environment to all time series and alerts, use: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The new configuration is applied automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. Enabling monitoring for user-defined projects 3.2. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, Thanos Querier, and Thanos Ruler. The following log levels can be applied to the relevant component in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites If you are setting a log level for Alertmanager, Prometheus Operator, Prometheus, or Thanos Querier in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Ruler in the openshift-user-workload-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To set a log level for a component in the openshift-monitoring project : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. For default platform monitoring, available component values are prometheusK8s , alertmanagerMain , prometheusOperator , and thanosQuerier . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . To set a log level for a component in the openshift-user-workload-monitoring project : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. For user workload monitoring, available component values are prometheus , prometheusOperator , and thanosRuler . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods for the component restarts automatically when you apply the log-level change. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Confirm that the log-level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. The following example lists the status of pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized loglevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 3.3. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. You can do so for default platform monitoring and for user-defined workload monitoring. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). If you are enabling the query log file feature for Prometheus in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are enabling the query log file feature for Prometheus in the openshift-user-workload-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. Procedure To set the query log file for Prometheus in the openshift-monitoring project : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add queryLogFile: <path> for prometheusK8s under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1 1 The full path to the file in which queries will be logged. Save the file to apply the changes. Warning When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Verify that the pods for the component are running. The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Read the query log: USD oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. To set the query log file for Prometheus in the openshift-user-workload-monitoring project : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add queryLogFile: <path> for prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 The full path to the file in which queries will be logged. Save the file to apply the changes. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Verify that the pods for the component are running. The following example command lists the status of pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps See Enabling monitoring for user-defined projects for steps to enable user-defined monitoring. 3.4. Enabling query logging for Thanos Querier For default platform monitoring in the openshift-monitoring project, you can enable the Cluster Monitoring Operator to log all queries run by Thanos Querier. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. Procedure You can enable query logging for Thanos Querier in the openshift-monitoring project: Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a thanosQuerier section under data/config.yaml and add values as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2 1 Set the value to true to enable logging and false to disable logging. The default value is false . 2 Set the value to debug , info , warn , or error . If no value exists for logLevel , the log level defaults to error . Save the file to apply the changes. Warning When you save changes to a monitoring config map, pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Verification Verify that the Thanos Querier pods are running. The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Run a test query using the following sample commands as a model: USD token=`oc sa get-token prometheus-k8s -n openshift-monitoring` USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer USDtoken" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' Run the following command to read the query log: USD oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query Note Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. | [
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2",
"oc -n openshift-monitoring get pods",
"token=`oc sa get-token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'",
"oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/monitoring/monitoring-configuring-external-alertmanagers_configuring-the-monitoring-stack |
7.124. mcelog | 7.124. mcelog 7.124.1. RHBA-2015:1303 - mcelog bug fix and enhancement update Updated mcelog packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The mcelog packages contain a daemon that collects and decodes Machine Check Exception (MCE) data on AMD64 and Intel 64 machines. Note The mcelog packages have been upgraded to upstream version 109, which provides a number of bug fixes and enhancements over the version. Notably, mcelog now supports Intel Core i7 CPU architectures. (BZ# 1145371 ) Users of mcelog are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-mcelog |
13.6. Virtual Disk Tasks | 13.6. Virtual Disk Tasks 13.6.1. Creating a Virtual Disk Image disk creation is managed entirely by the Manager. Direct LUN disks require externally prepared targets that already exist. Cinder disks require access to an instance of OpenStack Volume that has been added to the Red Hat Virtualization environment using the External Providers window; see Section 14.2.4, "Adding an OpenStack Block Storage (Cinder) Instance for Storage Management" for more information. You can create a virtual disk that is attached to a specific virtual machine. Additional options are available when creating an attached virtual disk, as specified in Section 13.6.2, "Explanation of Settings in the New Virtual Disk Window" . Creating a Virtual Disk Attached to a Virtual Machine Click Compute Virtual Machines . Click the virtual machine's name to open the details view. Click the Disks tab. Click New . Click the appropriate button to specify whether the virtual disk will be an Image , Direct LUN , or Cinder disk. Select the options required for your virtual disk. The options change based on the disk type selected. See Section 13.6.2, "Explanation of Settings in the New Virtual Disk Window" for more details on each option for each disk type. Click OK . You can also create a floating virtual disk that does not belong to any virtual machines. You can attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable. Some options are not available when creating a virtual disk, as specified in Section 13.6.2, "Explanation of Settings in the New Virtual Disk Window" . Creating a Floating Virtual Disk Important Creating floating virtual disks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ . Click Storage Disks . Click New . Click the appropriate button to specify whether the virtual disk will be an Image , Direct LUN , or Cinder disk. Select the options required for your virtual disk. The options change based on the disk type selected. See Section 13.6.2, "Explanation of Settings in the New Virtual Disk Window" for more details on each option for each disk type. Click OK . 13.6.2. Explanation of Settings in the New Virtual Disk Window Because the New Virtual Disk windows for creating floating and attached virtual disks are very similar, their settings are described in a single section. Table 13.2. New Virtual Disk and Edit Virtual Disk Settings: Image Field Name Description Size(GB) The size of the new virtual disk in GB. Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. Interface This field only appears when creating an attached disk. The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center This field only appears when creating a floating disk. The data center in which the virtual disk will be available. Storage Domain The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. Allocation Policy The provisioning policy for the new virtual disk. Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. The virtual size and the actual size of a preallocated disk are the same. Preallocated virtual disks take more time to create than thin provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers and other I/O intensive virtual machines. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible. Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. The virtual size of the disk is the maximum limit; the actual size of the disk is the space that has been allocated so far. Thin provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thin provisioned virtual disks are recommended for desktops. Disk Profile The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers. Activate Disk(s) This field only appears when creating an attached disk. Activate the virtual disk immediately after creation. Wipe After Delete Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted. Bootable This field only appears when creating an attached disk. Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read-Only This field only appears when creating an attached disk. Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. Enable Discard This field only appears when creating an attached disk. Allows you to shrink a thin provisioned disk while the virtual machine is up. For block storage, the underlying storage device must support discard calls, and the option cannot be used with Wipe After Delete unless the underlying storage supports the discard_zeroes_data property. For file storage, the underlying file system and the block device must support discard calls. If all requirements are met, SCSI UNMAP commands issued from guest virtual machines is passed on by QEMU to the underlying storage to free up the unused space. The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets . Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs. Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons to each LUN, select the LUN to add. Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data. The following considerations must be made when using a direct LUN as a virtual machine hard disk image: Live storage migration of direct LUN hard disk images is not supported. Direct LUN disks are not included in virtual machine exports. Direct LUN disks are not included in virtual machine snapshots. Table 13.3. New Virtual Disk and Edit Virtual Disk Settings: Direct LUN Field Name Description Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. By default the last 4 characters of the LUN ID is inserted into the field. The default behavior can be configured by setting the PopulateDirectLUNDiskDescriptionWithLUNId configuration key to the appropriate value using the engine-config command. The configuration key can be set to -1 for the full LUN ID to be used, or 0 for this feature to be ignored. A positive integer populates the description with the corresponding number of characters of the LUN ID. Interface This field only appears when creating an attached disk. The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center This field only appears when creating a floating disk. The data center in which the virtual disk will be available. Host The host on which the LUN will be mounted. You can select any host in the data center. Storage Type The type of external LUN to add. You can select from either iSCSI or Fibre Channel . Discover Targets This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected. Address - The host name or IP address of the target server. Port - The port by which to attempt a connection to the target server. The default port is 3260. User Authentication - The iSCSI server requires User Authentication. The User Authentication field is visible when you are using iSCSI external LUNs. CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. Activate Disk(s) This field only appears when creating an attached disk. Activate the virtual disk immediately after creation. Bootable This field only appears when creating an attached disk. Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read-Only This field only appears when creating an attached disk. Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. Enable Discard This field only appears when creating an attached disk. Allows you to shrink a thin provisioned disk while the virtual machine is up. With this option enabled, SCSI UNMAP commands issued from guest virtual machines is passed on by QEMU to the underlying storage to free up the unused space. Enable SCSI Pass-Through This field only appears when creating an attached disk. Available when the Interface is set to VirtIO-SCSI . Selecting this check box enables passthrough of a physical SCSI device to the virtual disk. A VirtIO-SCSI interface with SCSI passthrough enabled automatically includes SCSI discard support. Read-Only is not supported when this check box is selected. When this check box is not selected, the virtual disk uses an emulated SCSI device. Read-Only is supported on emulated VirtIO-SCSI disks. Allow Privileged SCSI I/O This field only appears when creating an attached disk. Available when the Enable SCSI Pass-Through check box is selected. Selecting this check box enables unfiltered SCSI Generic I/O (SG_IO) access, allowing privileged SG_IO commands on the disk. This is required for persistent reservations. Using SCSI Reservation This field only appears when creating an attached disk. Available when the Enable SCSI Pass-Through and Allow Privileged SCSI I/O check boxes are selected. Selecting this check box disables migration for any virtual machine using this disk, to prevent virtual machines that are using SCSI reservation from losing access to the disk. The Cinder settings form will be disabled if there are no available OpenStack Volume storage domains on which you have permissions to create a disk in the relevant Data Center. Cinder disks require access to an instance of OpenStack Volume that has been added to the Red Hat Virtualization environment using the External Providers window; see Section 14.2.4, "Adding an OpenStack Block Storage (Cinder) Instance for Storage Management" for more information. Table 13.4. New Virtual Disk and Edit Virtual Disk Settings: Cinder Field Name Description Size(GB) The size of the new virtual disk in GB. Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. Interface This field only appears when creating an attached disk. The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center This field only appears when creating a floating disk. The data center in which the virtual disk will be available. Storage Domain The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. Volume Type The volume type of the virtual disk. The drop-down list shows all available volume types. The volume type will be managed and configured on OpenStack Cinder. Activate Disk(s) This field only appears when creating an attached disk. Activate the virtual disk immediately after creation. Bootable This field only appears when creating an attached disk. Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read-Only This field only appears when creating an attached disk. Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. Important Mounting a journaled file system requires read-write access. Using the Read-Only option is not appropriate for virtual disks that contain such file systems (e.g. EXT3 , EXT4 , or XFS ). 13.6.3. Overview of Live Storage Migration Virtual disks can be migrated from one storage domain to another while the virtual machine to which they are attached is running. This is referred to as live storage migration. When a disk attached to a running virtual machine is migrated, a snapshot of that disk's image chain is created in the source storage domain, and the entire image chain is replicated in the destination storage domain. As such, ensure that you have sufficient storage space in both the source storage domain and the destination storage domain to host both the disk image chain and the snapshot. A new snapshot is created on each live storage migration attempt, even when the migration fails. Consider the following when using live storage migration: You can live migrate multiple disks at one time. Multiple disks for the same virtual machine can reside across more than one storage domain, but the image chain for each disk must reside on a single storage domain. You can live migrate disks between any two storage domains in the same data center. You cannot live migrate direct LUN hard disk images or disks marked as shareable. 13.6.4. Moving a Virtual Disk Move a virtual disk that is attached to a virtual machine or acts as a floating virtual disk from one storage domain to another. You can move a virtual disk that is attached to a running virtual machine; this is referred to as live storage migration. Alternatively, shut down the virtual machine before continuing. Consider the following when moving a disk: You can move multiple disks at the same time. You can move disks between any two storage domains in the same data center. If the virtual disk is attached to a virtual machine that was created based on a template and used the thin provisioning storage allocation option, you must copy the disks for the template on which the virtual machine was based to the same storage domain as the virtual disk. Moving a Virtual Disk Click Storage Disks and select one or more virtual disks to move. Click Move . From the Target list, select the storage domain to which the virtual disk(s) will be moved. From the Disk Profile list, select a profile for the disk(s), if applicable. Click OK . The virtual disks are moved to the target storage domain. During the move procedure, the Status column displays Locked and a progress bar indicating the progress of the move operation. 13.6.5. Changing the Disk Interface Type Users can change a disk's interface type after the disk has been created. This enables you to attach an existing disk to a virtual machine that requires a different interface type. For example, a disk using the VirtIO interface can be attached to a virtual machine requiring the VirtIO-SCSI or IDE interface. This provides flexibility to migrate disks for the purpose of backup and restore, or disaster recovery. The disk interface for shareable disks can also be updated per virtual machine. This means that each virtual machine that uses the shared disk can use a different interface type. To update a disk interface type, all virtual machines using the disk must first be stopped. Changing a Disk Interface Type Click Compute Virtual Machines and stop the appropriate virtual machine(s). Click the virtual machine's name to open the details view. Click the Disks tab and select the disk. Click Edit . From the Interface list, select the new interface type and click OK . You can attach a disk to a different virtual machine that requires a different interface type. Attaching a Disk to a Different Virtual Machine using a Different Interface Type Click Compute Virtual Machines and stop the appropriate virtual machine(s). Click the virtual machine's name to open the details view. Click the Disks tab and select the disk. Click Remove , then click OK . Go back to Virtual Machines and click the name of the new virtual machine that the disk will be attached to. Click the Disks tab, then click Attach . Select the disk in the Attach Virtual Disks window and select the appropriate interface from the Interface drop-down. Click OK . 13.6.6. Copying a Virtual Disk You can copy a virtual disk from one storage domain to another. The copied disk can be attached to virtual machines. Copying a Virtual Disk Click Storage Disks and select the virtual disk(s). Click Copy . Optionally, enter a new name in the Alias field. From the Target list, select the storage domain to which the virtual disk(s) will be copied. From the Disk Profile list, select a profile for the disk(s), if applicable. Click OK . The virtual disks have a status of Locked while being copied. 13.6.7. Uploading Images to a Data Storage Domain You can upload virtual disk images and ISO images to your data storage domain in the Administration Portal or with the REST API. See Section 11.8.1, "Uploading Images to a Data Storage Domain" . 13.6.8. Importing a Disk Image from an Imported Storage Domain Import floating virtual disks from an imported storage domain. Note Only QEMU-compatible disks can be imported into the Manager. Importing a Disk Image Click Storage Domains . Click the name of an imported storage domain to open the details view. Click the Disk Import tab. Select one or more disks and click Import . Select the appropriate Disk Profile for each disk. Click OK . 13.6.9. Importing an Unregistered Disk Image from an Imported Storage Domain Import floating virtual disks from a storage domain. Floating disks created outside of a Red Hat Virtualization environment are not registered with the Manager. Scan the storage domain to identify unregistered floating disks to be imported. Note Only QEMU-compatible disks can be imported into the Manager. Importing a Disk Image Click Storage Domains . Click the storage domain's name to open the details view. Click More Actions ( ), then click Scan Disks so that the Manager can identify unregistered disks. Click the Disk Import tab. Select one or more disk images and click Import . Select the appropriate Disk Profile for each disk. Click OK . 13.6.10. Importing a Virtual Disk from an OpenStack Image Service Virtual disks managed by an OpenStack Image Service can be imported into the Red Hat Virtualization Manager if that OpenStack Image Service has been added to the Manager as an external provider. Click Storage Domains . Click the OpenStack Image Service domain's name to open the details view. Click the Images tab and select an image. Click Import . Select the Data Center into which the image will be imported. From the Domain Name drop-down list, select the storage domain in which the image will be stored. Optionally, select a quota to apply to the image from the Quota drop-down list. Click OK . The disk can now be attached to a virtual machine. 13.6.11. Exporting a Virtual Disk to an OpenStack Image Service Virtual disks can be exported to an OpenStack Image Service that has been added to the Manager as an external provider. Important Virtual disks can only be exported if they do not have multiple volumes, are not thin provisioned, and do not have any snapshots. Click Storage Disks and select the disks to export. Click More Actions ( ), then click Export . From the Domain Name drop-down list, select the OpenStack Image Service to which the disks will be exported. From the Quota drop-down list, select a quota for the disks if a quota is to be applied. Click OK . 13.6.12. Reclaiming Virtual Disk Space Virtual disks that use thin provisioning do not automatically shrink after deleting files from them. For example, if the actual disk size is 100GB and you delete 50GB of files, the allocated disk size remains at 100GB, and the remaining 50GB is not returned to the host, and therefore cannot be used by other virtual machines. This unused disk space can be reclaimed by the host by performing a sparsify operation on the virtual machine's disks. This transfers the free space from the disk image to the host. You can sparsify multiple virtual disks in parallel. Red Hat recommends performing this operation before cloning a virtual machine, creating a template based on a virtual machine, or cleaning up a storage domain's disk space. Limitations NFS storage domains must use NFS version 4.2 or higher. You cannot sparsify a disk that uses a direct LUN or Cinder. You cannot sparsify a disk that uses a preallocated allocation policy. If you are creating a virtual machine from a template, you must select Thin from the Storage Allocation field, or if selecting Clone , ensure that the template is based on a virtual machine that has thin provisioning. You can only sparsify active snapshots. Sparsifying a Disk Click Compute Virtual Machines and shut down the required virtual machine. Click the virtual machine's name to open the details view. Click the Disks tab. Ensure that the disk's status is OK . Click More Actions ( ), then click Sparsify . Click OK . A Started to sparsify event appears in the Events tab during the sparsify operation and the disk's status displays as Locked . When the operation is complete, a Sparsified successfully event appears in the Events tab and the disk's status displays as OK . The unused disk space has been returned to the host and is available for use by other virtual machines. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Virtual_Disk_Tasks |
Chapter 3. Core Server Configuration Reference | Chapter 3. Core Server Configuration Reference The chapter provides an alphabetical reference for all core (server-related) attributes. Section 2.2.1.1, "Overview of the Directory Server Configuration" contains a good overview of the Red Hat Directory Server configuration files. 3.1. Core Server Configuration Attributes Reference This section contains reference information on the configuration attributes that are relevant to the core server functionality. For information on changing server configuration, see Section 2.2.1.2, "Accessing and Modifying Server Configuration" . For a list of server features that are implemented as plug-ins, see Section 4.1, "Server Plug-in Functionality Reference" . For help with implementing custom server functionality, contact Directory Server support. The configuration information stored in the dse.ldif file is organized as an information tree under the general configuration entry cn=config , as shown in the following diagram. Figure 3.1. Directory Information Tree Showing Configuration Data Most of these configuration tree nodes are covered in the following sections. The cn=plugins node is covered in Chapter 4, Plug-in Implemented Server Functionality Reference . The description of each attribute contains details such as the DN of its directory entry, its default value, the valid range of values, and an example of its use. Note Some of the entries and attributes described in this chapter may change in future releases of the product. 3.1.1. cn=config General configuration entries are stored in the cn=config entry. The cn=config entry is an instance of the nsslapdConfig object class, which in turn inherits from extensibleObject object class. 3.1.1.1. nsslapd-accesslog (Access Log) This attribute specifies the path and filename of the log used to record each LDAP access. The following information is recorded by default in the log file: IP address (IPv4 or IPv6) of the client machine that accessed the database. Operations performed (for example, search, add, and modify). Result of the access (for example, the number of entries returned or an error code). For more information on turning access logging off, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . For access logging to be enabled, this attribute must have a valid path and parameter, and the nsslapd-accesslog-logging-enabled configuration attribute must be switched to on . The table lists the four possible combinations of values for these two configuration attributes and their outcome in terms of disabling or enabling of access logging. Table 3.1. dse.ldif File Attributes Attribute Value Logging enabled or disabled nsslapd-accesslog-logging-enabled nsslapd-accesslog on empty string Disabled nsslapd-accesslog-logging-enabled nsslapd-accesslog on filename Enabled nsslapd-accesslog-logging-enabled nsslapd-accesslog off empty string Disabled nsslapd-accesslog-logging-enabled nsslapd-accesslog off filename Disabled Parameter Description Entry DN cn=config Valid Values Any valid filename. Default Value /var/log/dirsrv/slapd- instance /access Syntax DirectoryString Example nsslapd-accesslog: /var/log/dirsrv/slapd- instance /access 3.1.1.2. nsslapd-accesslog-level (Access Log Level) This attribute controls what is logged to the access log. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values * 0 - No access logging * 4 - Logging for internal access operations * 256 - Logging for connections, operations, and results * 512 - Logging for access to an entry and referrals * These values can be added together to provide the exact type of logging required; for example, 516 (4 + 512) to obtain internal access operation, entry access, and referral logging. Default Value 256 Syntax Integer Example nsslapd-accesslog-level: 256 3.1.1.3. nsslapd-accesslog-list (List of Access Log Files) This read-only attribute, which cannot be set, provides a list of access log files used in access log rotation. Parameter Description Entry DN cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-accesslog-list: accesslog2,accesslog3 3.1.1.4. nsslapd-accesslog-logbuffering (Log Buffering) When set to off , the server writes all access log entries directly to disk. Buffering allows the server to use access logging even when under a heavy load without impacting performance. However, when debugging, it is sometimes useful to disable buffering in order to see the operations and their results right away instead of having to wait for the log entries to be flushed to the file. Disabling log buffering can severely impact performance in heavily loaded servers. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-accesslog-logbuffering: off 3.1.1.5. nsslapd-accesslog-logexpirationtime (Access Log Expiration Time) This attribute specifies the maximum age that a log file is allowed to reach before it is deleted. This attribute supplies only the number of units. The units are provided by the nsslapd-accesslog-logexpirationtimeunit attribute. Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) A value of -1 or 0 means that the log never expires. Default Value -1 Syntax Integer Example nsslapd-accesslog-logexpirationtime: 2 3.1.1.6. nsslapd-accesslog-logexpirationtimeunit (Access Log Expiration Time Unit) This attribute specifies the units for nsslapd-accesslog-logexpirationtime attribute. If the unit is unknown by the server, then the log never expires. Parameter Description Entry DN cn=config Valid Values month | week | day Default Value month Syntax DirectoryString Example nsslapd-accesslog-logexpirationtimeunit: week 3.1.1.7. nsslapd-accesslog-logging-enabled (Access Log Enable Logging) Disables and enables accesslog logging but only in conjunction with the nsslapd-accesslog attribute that specifies the path and parameter of the log used to record each database access. For access logging to be enabled, this attribute must be switched to on , and the nsslapd-accesslog configuration attribute must have a valid path and parameter. The table lists the four possible combinations of values for these two configuration attributes and their outcome in terms of disabling or enabling of access logging. Table 3.2. dse.ldif Attributes Attribute Value Logging Enabled or Disabled nsslapd-accesslog-logging-enabled nsslapd-accesslog on empty string Disabled nsslapd-accesslog-logging-enabled nsslapd-accesslog on filename Enabled nsslapd-accesslog-logging-enabled nsslapd-accesslog off empty string Disabled nsslapd-accesslog-logging-enabled nsslapd-accesslog off filename Disabled Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-accesslog-logging-enabled: off 3.1.1.8. nsslapd-accesslog-logmaxdiskspace (Access Log Maximum Disk Space) This attribute specifies the maximum amount of disk space in megabytes that the access logs are allowed to consume. If this value is exceeded, the oldest access log is deleted. When setting a maximum disk space, consider the total number of log files that can be created due to log file rotation. Also, remember that there are three different log files (access log, audit log, and error log) maintained by the Directory Server, each of which consumes disk space. Compare these considerations to the total amount of disk space for the access log. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the disk space allowed to the access log is unlimited in size. Default Value 500 Syntax Integer Example nsslapd-accesslog-logmaxdiskspace: 500 3.1.1.9. nsslapd-accesslog-logminfreediskspace (Access Log Minimum Free Disk Space) This attribute sets the minimum allowed free disk space in megabytes. When the amount of free disk space falls below the value specified on this attribute, the oldest access logs are deleted until enough disk space is freed to satisfy this attribute. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example nsslapd-accesslog-logminfreediskspace: -1 3.1.1.10. nsslapd-accesslog-logrotationsync-enabled (Access Log Rotation Sync Enabled) This attribute sets whether access log rotation is to be synchronized with a particular time of the day. Synchronizing log rotation this way can generate log files at a specified time during a day, such as midnight to midnight every day. This makes analysis of the log files much easier because they then map directly to the calendar. For access log rotation to be synchronized with time-of-day, this attribute must be enabled with the nsslapd-accesslog-logrotationsynchour and nsslapd-accesslog-logrotationsyncmin attribute values set to the hour and minute of the day for rotating log files. For example, to rotate access log files every day at midnight, enable this attribute by setting its value to on , and then set the values of the nsslapd-accesslog-logrotationsynchour and nsslapd-accesslog-logrotationsyncmin attributes to 0 . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-accesslog-logrotationsync-enabled: on 3.1.1.11. nsslapd-accesslog-logrotationsynchour (Access Log Rotation Sync Hour) This attribute sets the hour of the day for rotating access logs. This attribute must be used in conjunction with nsslapd-accesslog-logrotationsync-enabled and nsslapd-accesslog-logrotationsyncmin attributes. Parameter Description Entry DN cn=config Valid Range 0 through 23 Default Value 0 Syntax Integer Example nsslapd-accesslog-logrotationsynchour: 23 3.1.1.12. nsslapd-accesslog-logrotationsyncmin (Access Log Rotation Sync Minute) This attribute sets the minute of the day for rotating access logs. This attribute must be used in conjunction with nsslapd-accesslog-logrotationsync-enabled and nsslapd-accesslog-logrotationsynchour attributes. Parameter Description Entry DN cn=config Valid Range 0 through 59 Default Value 0 Syntax Integer Example nsslapd-accesslog-logrotationsyncmin: 30 3.1.1.13. nsslapd-accesslog-logrotationtime (Access Log Rotation Time) This attribute sets the time between access log file rotations. This attribute supplies only the number of units. The units (day, week, month, and so forth) are given by the nsslapd-accesslog-logrotationtimeunit attribute. Directory Server rotates the log at the first write operation after the configured interval has expired, regardless of the size of the log. Although it is not recommended for performance reasons to specify no log rotation since the log grows indefinitely, there are two ways of specifying this. Either set the nsslapd-accesslog-maxlogsperdir attribute value to 1 or set the nsslapd-accesslog-logrotationtime attribute to -1 . The server checks the nsslapd-accesslog-maxlogsperdir attribute first, and, if this attribute value is larger than 1 , the server then checks the nsslapd-accesslog-logrotationtime attribute. See Section 3.1.1.16, "nsslapd-accesslog-maxlogsperdir (Access Log Maximum Number of Log Files)" for more information. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the time between access log file rotation is unlimited. Default Value 1 Syntax Integer Example nsslapd-accesslog-logrotationtime: 100 3.1.1.14. nsslapd-accesslog-logrotationtimeunit (Access Log Rotation Time Unit) This attribute sets the units for the nsslapd-accesslog-logrotationtime attribute. Parameter Description Entry DN cn=config Valid Values month | week | day | hour | minute Default Value day Syntax DirectoryString Example nsslapd-accesslog-logrotationtimeunit: week 3.1.1.15. nsslapd-accesslog-maxlogsize (Access Log Maximum Log Size) This attribute sets the maximum access log size in megabytes. When this value is reached, the access log is rotated. That means the server starts writing log information to a new log file. If the nsslapd-accesslog-maxlogsperdir attribute is set to 1 , the server ignores this attribute. When setting a maximum log size, consider the total number of log files that can be created due to log file rotation. Also, remember that there are three different log files (access log, audit log, and error log) maintained by the Directory Server, each of which consumes disk space. Compare these considerations to the total amount of disk space for the access log. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means the log file is unlimited in size. Default Value 100 Syntax Integer Example nsslapd-accesslog-maxlogsize: 100 3.1.1.16. nsslapd-accesslog-maxlogsperdir (Access Log Maximum Number of Log Files) This attribute sets the total number of access logs that can be contained in the directory where the access log is stored. Each time the access log is rotated, a new log file is created. When the number of files contained in the access log directory exceeds the value stored in this attribute, then the oldest version of the log file is deleted. For performance reasons, Red Hat recommends not setting this value to 1 because the server does not rotate the log, and it grows indefinitely. If the value for this attribute is higher than 1 , then check the nsslapd-accesslog-logrotationtime attribute to establish whether log rotation is specified. If the nsslapd-accesslog-logrotationtime attribute has a value of -1 , then there is no log rotation. See Section 3.1.1.13, "nsslapd-accesslog-logrotationtime (Access Log Rotation Time)" for more information. Note that, depending on the values set in nsslapd-accesslog-logminfreediskspace and nsslapd-accesslog-maxlogsize , the actual number of logs could be less than what you configure in nsslapd-accesslog-maxlogsperdir . For example, if nsslapd-accesslog-maxlogsperdir uses the default (10 files) and you set nsslapd-accesslog-logminfreediskspace to 500 MB and nsslapd-accesslog-maxlogsize to 100 MB, Directory Server keeps only 5 access files. Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) Default Value 10 Syntax Integer Example nsslapd-accesslog-maxlogsperdir: 10 3.1.1.17. nsslapd-accesslog-mode (Access Log File Permission) This attribute sets the access mode or file permission with which access log files are to be created. The valid values are any combination of 000 to 777 (these mirror the numbered or absolute UNIX file permissions). The value must be a 3-digit number, the digits varying from 0 through 7 : 0 - None 1 - Execute only 2 - Write only 3 - Write and execute 4 - Read only 5 - Read and execute 6 - Read and write 7 - Read, write, and execute In the 3-digit number, the first digit represents the owner's permissions, the second digit represents the group's permissions, and the third digit represents everyone's permissions. When changing the default value, remember that 000 does not allow access to the logs and that allowing write permissions to everyone can result in the logs being overwritten or deleted by anyone. The newly configured access mode only affects new logs that are created; the mode is set when the log rotates to a new file. Parameter Description Entry DN cn=config Valid Range 000 through 777 Default Value 600 Syntax Integer Example nsslapd-accesslog-mode: 600 3.1.1.18. nsslapd-allow-anonymous-access If a user attempts to connect to the Directory Server without supplying any bind DN or password, this is an anonymous bind . Anonymous binds simplify common search and read operations, like checking the directory for a phone number or email address, by not requiring users to authenticate to the directory first. However, there are risks with anonymous binds. Adequate ACIs must be in place to restrict access to sensitive information and to disallow actions like modifies and deletes. Additionally, anonymous binds can be used for denial of service attacks or for malicious people to gain access to the server. Anonymous binds can be disabled to increase security (off). By default, anonymous binds are allowed (on) for search and read operations. This allows access to regular directory entries , which includes user and group entries as well as configuration entries like the root DSE. A third option, rootdse , allows anonymous search and read access to search the root DSE itself, but restricts access to all other directory entries. Optionally, resource limits can be placed on anonymous binds using the nsslapd-anonlimitsdn attribute as described in Section 3.1.1.22, "nsslapd-anonlimitsdn" . Changes to this value will not take effect until the server is restarted. Parameter Description Entry DN cn=config Valid Values on | off | rootdse Default Value on Syntax DirectoryString Example nsslapd-allow-anonymous-access: on 3.1.1.19. nsslapd-allow-hashed-passwords This parameter disables the pre-hashed password checks. By default, the Directory Server does not allow pre-hashed passwords to be set by anyone other than the Directory Manager. You can delegate this privilege to other users when you add them to the Password Administrators group. However in some scenarios, like when the replication partner already controls the pre-hashed passwords checking, this feature has to be disabled on the Directory Server. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-allow-hashed-passwords: off 3.1.1.20. nsslapd-allow-unauthenticated-binds Unauthenticated binds are connections to Directory Server where a user supplies an empty password. Using the default settings, Directory Server denies access in this scenario for security reasons. Warning Red Hat recommends not enabling unauthenticated binds. This authentication method enables users to bind without supplying a password as any account, including the Directory Manager. After the bind, the user can access all data with the permissions of the account used to bind. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-allow-unauthenticated-binds: off 3.1.1.21. nsslapd-allowed-sasl-mechanisms Per default, the root DSE lists all mechanisms the SASL library supports. However in some environments only certain ones are preferred. The nsslapd-allowed-sasl-mechanisms attribute allows you to enable only some defined SASL mechanisms. The mechanism names must consist of uppercase letters, numbers, and underscores. Each mechanism can be separated by commas or spaces. Note The EXTERNAL mechanism is actually not used by any SASL plug-in. It is internal to the server, and is mainly used for TLS client authentication. Hence, the EXTERNAL mechanism cannot be restricted or controlled. It will always appear in the supported mechanisms list, regardless what is set in the nsslapd-allowed-sasl-mechanisms attribute. This setting does not require a server restart to take effect. Parameter Description Entry DN cn=config Valid Values Any valid SASL mechanism Default Value None (all SASL mechanisms allowed) Syntax DirectoryString Example nsslapd-allowed-sasl-mechanisms: GSSAPI, DIGEST-MD5, OTP 3.1.1.22. nsslapd-anonlimitsdn Resource limits can be set on authenticated binds. The resource limits can set a cap on how many entries can be searched in a single operation ( nsslapd-sizeLimit ), a time limit ( nsslapd-timelimit ) and time out period ( nsslapd-idletimeout ) for searches, and the total number of entries that can be searched ( nsslapd-lookthroughlimit ). These resource limits prevent denial of service attacks from tying up directory resources and improve overall performance. Resource limits are set on a user entry. An anonymous bind, obviously, does not have a user entry associated with it. This means that resource limits usually do not apply to anonymous operations. To set resource limits for anonymous binds, a template entry can be created, with the appropriate resource limits. The nsslapd-anonlimitsdn configuration attribute can then be added that points to this entry and applies the resource limits to anonymous binds. Parameter Description Entry DN cn=config Valid Values Any DN Default Value None Syntax DirectoryString Example nsslapd-anonlimitsdn: cn=anon template,ou=people,dc=example,dc=com 3.1.1.23. nsslapd-attribute-name-exceptions This attribute allows non-standard characters in attribute names to be used for backwards compatibility with older servers, such as "_" in schema-defined attributes. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-attribute-name-exceptions: on 3.1.1.24. nsslapd-auditlog (Audit Log) This attribute sets the path and filename of the log used to record changes made to each database. Parameter Description Entry DN cn=config Valid Values Any valid filename Default Value /var/log/dirsrv/slapd- instance /audit Syntax DirectoryString Example nsslapd-auditlog: /var/log/dirsrv/slapd- instance /audit For audit logging to be enabled, this attribute must have a valid path and parameter, and the nsslapd-auditlog-logging-enabled configuration attribute must be switched to on . The table lists the four possible combinations of values for these two configuration attributes and their outcome in terms of disabling or enabling of audit logging. Table 3.3. Possible Combinations for nsslapd-auditlog Attributes in dse.ldif Value Logging enabled or disabled nsslapd-auditlog-logging-enabled nsslapd-auditlog on empty string Disabled nsslapd-auditlog-logging-enabled nsslapd-auditlog on filename Enabled nsslapd-auditlog-logging-enabled nsslapd-auditlog off empty string Disabled nsslapd-auditlog-logging-enabled nsslapd-auditlog off filename Disabled 3.1.1.25. nsslapd-auditlog-display-attrs With the nsslapd-auditlog-display-attrs attribute you can set attributes that Directory Server displays in the audit log to provide useful identifying information about the entry being modified. By adding attributes to the audit log, you can check the current state of certain attributes in the entry and details of the entry update. You can display attributes in the log by choosing one of the following options: To display a certain attribute of the entry that Directory Server modifies, provide the attribute name as a value. To display more than one attribute, provide the space separated list of attribute names as a value. To display all attributes of the entry, use an asterisk (*) as a value. Provide the space separated list of attributes that Directory Server must display in the audit log, or use an asterisk (*) as a value to display all attributes of an entry being modified. For example, you want to add cn attribute to the audit log output. When you set nsslapd-auditlog-display-attrs attribute to cn , the audit log displays the following output: Parameter Description Entry DN cn=config Valid Values Any valid attribute name. Use an asterisk (*) if you want to display all attributes of an entry in the audit log. Default Value None Syntax DirectoryString Example nsslapd-auditlog-display-attrs: cn ou 3.1.1.26. nsslapd-auditlog-list Provides a list of audit log files. Parameter Description Entry DN cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-auditlog-list: auditlog2,auditlog3 3.1.1.27. nsslapd-auditlog-logexpirationtime (Audit Log Expiration Time) This attribute sets the maximum age that a log file is allowed to be before it is deleted. This attribute supplies only the number of units. The units (day, week, month, and so forth) are given by the nsslapd-auditlog-logexpirationtimeunit attribute. Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) A value of -1 or 0 means that the log never expires. Default Value -1 Syntax Integer Example nsslapd-auditlog-logexpirationtime: 1 3.1.1.28. nsslapd-auditlog-logexpirationtimeunit (Audit Log Expiration Time Unit) This attribute sets the units for the nsslapd-auditlog-logexpirationtime attribute. If the unit is unknown by the server, then the log never expires. Parameter Description Entry DN cn=config Valid Values month | week | day Default Value week Syntax DirectoryString Example nsslapd-auditlog-logexpirationtimeunit: day 3.1.1.29. nsslapd-auditlog-logging-enabled (Audit Log Enable Logging) Turns audit logging on and off. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-auditlog-logging-enabled: off For audit logging to be enabled, this attribute must have a valid path and parameter and the nsslapd-auditlog-logging-enabled configuration attribute must be switched to on . The table lists the four possible combinations of values for these two configuration attributes and their outcome in terms of disabling or enabling of audit logging. Table 3.4. Possible combinations for nsslapd-auditlog and nsslapd-auditlog-logging-enabled Attribute Value Logging enabled or disabled nsslapd-auditlog-logging-enabled nsslapd-auditlog on empty string Disabled nsslapd-auditlog-logging-enabled nsslapd-auditlog on filename Enabled nsslapd-auditlog-logging-enabled nsslapd-auditlog off empty string Disabled nsslapd-auditlog-logging-enabled nsslapd-auditlog off filename Disabled 3.1.1.30. nsslapd-auditlog-logmaxdiskspace (Audit Log Maximum Disk Space) This attribute sets the maximum amount of disk space in megabytes that the audit logs are allowed to consume. If this value is exceeded, the oldest audit log is deleted. When setting a maximum disk space, consider the total number of log files that can be created due to log file rotation. Also remember that there are three different log files (access log, audit log, and error log) maintained by the Directory Server, each of which consumes disk space. Compare these considerations with the total amount of disk space for the audit log. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the disk space allowed to the audit log is unlimited in size. Default Value -1 Syntax Integer Example nsslapd-auditlog-logmaxdiskspace: 10000 3.1.1.31. nsslapd-auditlog-logminfreediskspace (Audit Log Minimum Free Disk Space) This attribute sets the minimum permissible free disk space in megabytes. When the amount of free disk space falls below the value specified by this attribute, the oldest audit logs are deleted until enough disk space is freed to satisfy this attribute. Parameter Description Entry DN cn=config Valid Range -1 (unlimited) | 1 to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example nsslapd-auditlog-logminfreediskspace: -1 3.1.1.32. nsslapd-auditlog-logrotationsync-enabled (Audit Log Rotation Sync Enabled) This attribute sets whether audit log rotation is to be synchronized with a particular time of the day. Synchronizing log rotation this way can generate log files at a specified time during a day, such as midnight to midnight every day. This makes analysis of the log files much easier because they then map directly to the calendar. For audit log rotation to be synchronized with time-of-day, this attribute must be enabled with the nsslapd-auditlog-logrotationsynchour and nsslapd-auditlog-logrotationsyncmin attribute values set to the hour and minute of the day for rotating log files. For example, to rotate audit log files every day at midnight, enable this attribute by setting its value to on , and then set the values of the nsslapd-auditlog-logrotationsynchour and nsslapd-auditlog-logrotationsyncmin attributes to 0 . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-auditlog-logrotationsync-enabled: on 3.1.1.33. nsslapd-auditlog-logrotationsynchour (Audit Log Rotation Sync Hour) This attribute sets the hour of the day for rotating audit logs. This attribute must be used in conjunction with nsslapd-auditlog-logrotationsync-enabled and nsslapd-auditlog-logrotationsyncmin attributes. Parameter Description Entry DN cn=config Valid Range 0 through 23 Default Value None (because nsslapd-auditlog-logrotationsync-enabled is off) Syntax Integer Example nsslapd-auditlog-logrotationsynchour: 23 3.1.1.34. nsslapd-auditlog-logrotationsyncmin (Audit Log Rotation Sync Minute) This attribute sets the minute of the day for rotating audit logs. This attribute must be used in conjunction with nsslapd-auditlog-logrotationsync-enabled and nsslapd-auditlog-logrotationsynchour attributes. Parameter Description Entry DN cn=config Valid Range 0 through 59 Default Value None (because nsslapd-auditlog-logrotationsync-enabled is off) Syntax Integer Example nsslapd-auditlog-logrotationsyncmin: 30 3.1.1.35. nsslapd-auditlog-logrotationtime (Audit Log Rotation Time) This attribute sets the time between audit log file rotations. This attribute supplies only the number of units. The units (day, week, month, and so forth) are given by the nsslapd-auditlog-logrotationtimeunit attribute. If the nsslapd-auditlog-maxlogsperdir attribute is set to 1 , the server ignores this attribute. Directory Server rotates the log at the first write operation after the configured interval has expired, regardless of the size of the log. Although it is not recommended for performance reasons to specify no log rotation, as the log grows indefinitely, there are two ways of specifying this. Either set the nsslapd-auditlog-maxlogsperdir attribute value to 1 or set the nsslapd-auditlog-logrotationtime attribute to -1 . The server checks the nsslapd-auditlog-maxlogsperdir attribute first, and, if this attribute value is larger than 1 , the server then checks the nsslapd-auditlog-logrotationtime attribute. See Section 3.1.1.38, "nsslapd-auditlog-maxlogsperdir (Audit Log Maximum Number of Log Files)" for more information. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the time between audit log file rotation is unlimited. Default Value 1 Syntax Integer Example nsslapd-auditlog-logrotationtime: 100 3.1.1.36. nsslapd-auditlog-logrotationtimeunit (Audit Log Rotation Time Unit) This attribute sets the units for the nsslapd-auditlog-logrotationtime attribute. Parameter Description Entry DN cn=config Valid Values month | week | day | hour | minute Default Value week Syntax DirectoryString Example nsslapd-auditlog-logrotationtimeunit: day 3.1.1.37. nsslapd-auditlog-maxlogsize (Audit Log Maximum Log Size) This attribute sets the maximum audit log size in megabytes. When this value is reached, the audit log is rotated. That means the server starts writing log information to a new log file. If nsslapd-auditlog-maxlogsperdir to 1 , the server ignores this attribute. When setting a maximum log size, consider the total number of log files that can be created due to log file rotation. Also, remember that there are three different log files (access log, audit log, and error log) maintained by the Directory Server, each of which consumes disk space. Compare these considerations to the total amount of disk space for the audit log. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means the log file is unlimited in size. Default Value 100 Syntax Integer Example nsslapd-auditlog-maxlogsize: 50 3.1.1.38. nsslapd-auditlog-maxlogsperdir (Audit Log Maximum Number of Log Files) This attribute sets the total number of audit logs that can be contained in the directory where the audit log is stored. Each time the audit log is rotated, a new log file is created. When the number of files contained in the audit log directory exceeds the value stored on this attribute, then the oldest version of the log file is deleted. The default is 1 log. If this default is accepted, the server will not rotate the log, and it grows indefinitely. If the value for this attribute is higher than 1 , then check the nsslapd-auditlog-logrotationtime attribute to establish whether log rotation is specified. If the nsslapd-auditlog-logrotationtime attribute has a value of -1 , then there is no log rotation. See Section 3.1.1.35, "nsslapd-auditlog-logrotationtime (Audit Log Rotation Time)" for more information. Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) Default Value 1 Syntax Integer Example nsslapd-auditlog-maxlogsperdir: 10 3.1.1.39. nsslapd-auditlog-mode (Audit Log File Permission) This attribute sets the access mode or file permissions with which audit log files are to be created. The valid values are any combination of 000 to 777 since they mirror numbered or absolute UNIX file permissions. The value must be a combination of a 3-digit number, the digits varying from 0 through 7 : 0 - None 1 - Execute only 2 - Write only 3 - Write and execute 4 - Read only 5 - Read and execute 6 - Read and write 7 - Read, write, and execute In the 3-digit number, the first digit represents the owner's permissions, the second digit represents the group's permissions, and the third digit represents everyone's permissions. When changing the default value, remember that 000 does not allow access to the logs and that allowing write permissions to everyone can result in the logs being overwritten or deleted by anyone. The newly configured access mode only affects new logs that are created; the mode is set when the log rotates to a new file. Parameter Description Entry DN cn=config Valid Range 000 through 777 Default Value 600 Syntax Integer Example nsslapd-auditlog-mode: 600 3.1.1.40. nsslapd-auditfaillog (Audit Fail Log) This attribute sets the path and filename of the log used to record failed LDAP modifications. If nsslapd-auditfaillog-logging-enabled is enabled, and nsslapd-auditfaillog is not set, the audit fail events are logged to the file specified in nsslapd-auditlog . If you set the nsslapd-auditfaillog parameter to the same path as nsslapd-auditlog , both are logged in the same file. Parameter Description Entry DN cn=config Valid Values Any valid filename Default Value /var/log/dirsrv/slapd- instance /audit Syntax DirectoryString Example nsslapd-auditfaillog: /var/log/dirsrv/slapd- instance /audit To enable the audit fail log, this attribute must have a valid path and the nsslapd-auditfaillog-logging-enabled attribute must be set to on 3.1.1.41. nsslapd-auditfaillog-list Provides a list of audit fail log files. Parameter Description Entry DN cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-auditfaillog-list: auditfaillog2,auditfaillog3 3.1.1.42. nsslapd-auditfaillog-logexpirationtime (Audit Fail Log Expiration Time) This attribute sets the maximum age of a log file before it is removed. It supplies to the number of units. Specify the units, such as day, week, month, and so forth in the nsslapd-auditfaillog-logexpirationtimeunit attribute. Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) A value of -1 or 0 means that the log never expires. Default Value -1 Syntax Integer Example nsslapd-auditfaillog-logexpirationtime: 1 3.1.1.43. nsslapd-auditfaillog-logexpirationtimeunit (Audit Fail Log Expiration Time Unit) This attribute sets the units for the nsslapd-auditfaillog-logexpirationtime attribute. If the unit is unknown by the server, the log never expires. Parameter Description Entry DN cn=config Valid Values month | week | day Default Value week Syntax DirectoryString Example nsslapd-auditfaillog-logexpirationtimeunit: day 3.1.1.44. nsslapd-auditfaillog-logging-enabled (Audit Fail Log Enable Logging) Turns on and off logging of failed LDAP modifications. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-auditfaillog-logging-enabled: off 3.1.1.45. nsslapd-auditfaillog-logmaxdiskspace (Audit Fail Log Maximum Disk Space) This attribute sets the maximum amount of disk space in megabytes the audit fail logs are can consume. If the size exceed the limit, the oldest audit fail log is deleted. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the disk space allowed to the audit fail log is unlimited in size. Default Value 100 Syntax Integer Example nsslapd-auditfaillog-logmaxdiskspace: 10000 3.1.1.46. nsslapd-auditfaillog-logminfreediskspace (Audit Fail Log Minimum Free Disk Space) This attribute sets the minimum permissible free disk space in megabytes. When the amount of free disk space is lower than the specified value, the oldest audit fail logs are deleted until enough disk space is freed. Parameter Description Entry DN cn=config Valid Range -1 (unlimited) | 1 to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example nsslapd-auditfaillog-logminfreediskspace: -1 3.1.1.47. nsslapd-auditfaillog-logrotationsync-enabled (Audit Fail Log Rotation Sync Enabled) This attribute sets whether audit fail log rotation is to be synchronized with a particular time of the day. Synchronizing log rotation this way can generate log files at a specified time during a day, such as midnight to midnight every day. This makes analysis of the log files much easier because they then map directly to the calendar. For audit fail log rotation to be synchronized with time-of-day, this attribute must be enabled with the nsslapd-auditfaillog-logrotationsynchour and nsslapd-auditfaillog-logrotationsyncmin attribute values set to the hour and minute of the day for rotating log files. For example, to rotate audit fail log files every day at midnight, enable this attribute by setting its value to on , and then set the values of the nsslapd-auditfaillog-logrotationsynchour and nsslapd-auditfaillog-logrotationsyncmin attributes to 0 . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-auditfaillog-logrotationsync-enabled: on 3.1.1.48. nsslapd-auditfaillog-logrotationsynchour (Audit Fail Log Rotation Sync Hour) This attribute sets the hour of the day the audit fail log is rotated. This attribute must be used in conjunction with nsslapd-auditfaillog-logrotationsync-enabled and nsslapd-auditfaillog-logrotationsyncmin attributes. Parameter Description Entry DN cn=config Valid Range 0 through 23 Default Value None (because nsslapd-auditfaillog-logrotationsync-enabled is off) Syntax Integer Example nsslapd-auditfaillog-logrotationsynchour: 23 3.1.1.49. nsslapd-auditfaillog-logrotationsyncmin (Audit Fail Log Rotation Sync Minute) This attribute sets the minute the audit fail log is rotated. This attribute must be used in conjunction with nsslapd-auditfaillog-logrotationsync-enabled and nsslapd-auditfaillog-logrotationsynchour attributes. Parameter Description Entry DN cn=config Valid Range 0 through 59 Default Value None (because nsslapd-auditfaillog-logrotationsync-enabled is off) Syntax Integer Example nsslapd-auditfaillog-logrotationsyncmin: 30 3.1.1.50. nsslapd-auditfaillog-logrotationtime (Audit Fail Log Rotation Time) This attribute sets the time between audit fail log file rotations. This attribute supplies only the number of units. The units (day, week, month, and so forth) are given by the nsslapd-auditfaillog-logrotationtimeunit attribute. If the nsslapd-auditfaillog-maxlogsperdir attribute is set to 1 , the server ignores this attribute. Directory Server rotates the log at the first write operation after the configured interval has expired, regardless of the size of the log. Although it is not recommended for performance reasons to specify no log rotation, as the log grows indefinitely, there are two ways of specifying this. Either set the nsslapd-auditfaillog-maxlogsperdir attribute value to 1 or set the nsslapd-auditfaillog-logrotationtime attribute to -1 . The server checks the nsslapd-auditfaillog-maxlogsperdir attribute first, and, if this attribute value is larger than 1 , the server then checks the nsslapd-auditfaillog-logrotationtime attribute. See Section 3.1.1.53, "nsslapd-auditfaillog-maxlogsperdir (Audit Fail Log Maximum Number of Log Files)" for more information. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means the time between audit fail log file rotation is unlimited. Default Value 1 Syntax Integer Example nsslapd-auditfaillog-logrotationtime: 100 3.1.1.51. nsslapd-auditfaillog-logrotationtimeunit (Audit Fail Log Rotation Time Unit) This attribute sets the units for the nsslapd-auditfaillog-logrotationtime attribute. Parameter Description Entry DN cn=config Valid Values month | week | day | hour | minute Default Value week Syntax DirectoryString Example nsslapd-auditfaillog-logrotationtimeunit: day 3.1.1.52. nsslapd-auditfaillog-maxlogsize (Audit Fail Log Maximum Log Size) This attribute sets the maximum audit fail log size in megabytes. When this value is reached, the audit fail log is rotated. That means the server starts writing log information to a new log file. If the nsslapd-auditfaillog-maxlogsperdir parameter is set to 1 , the server ignores this attribute. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means the log file is unlimited in size. Default Value 100 Syntax Integer Example nsslapd-auditfaillog-maxlogsize: 50 3.1.1.53. nsslapd-auditfaillog-maxlogsperdir (Audit Fail Log Maximum Number of Log Files) This attribute sets the total number of audit fail logs that can be contained in the directory where the audit log is stored. Each time the audit fail log is rotated, a new log file is created. When the number of files contained in the audit log directory exceeds the value stored on this attribute, then the oldest version of the log file is deleted. The default is 1 log. If this default is accepted, the server will not rotate the log, and it grows indefinitely. If the value for this attribute is higher than 1 , then check the nsslapd-auditfaillog-logrotationtime attribute to establish whether log rotation is specified. If the nsslapd-auditfaillog-logrotationtime attribute has a value of -1 , then there is no log rotation. See Section 3.1.1.50, "nsslapd-auditfaillog-logrotationtime (Audit Fail Log Rotation Time)" for more information. Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) Default Value 1 Syntax Integer Example nsslapd-auditfaillog-maxlogsperdir: 10 3.1.1.54. nsslapd-auditfaillog-mode (Audit Fail Log File Permission) This attribute sets the access mode or file permissions with which audit fail log files are to be created. The valid values are any combination of 000 to 777 since they mirror numbered or absolute UNIX file permissions. The value must be a combination of a 3-digit number, the digits varying from 0 through 7 : 0 - None 1 - Execute only 2 - Write only 3 - Write and execute 4 - Read only 5 - Read and execute 6 - Read and write 7 - Read, write, and execute In the 3-digit number, the first digit represents the owner's permissions, the second digit represents the group's permissions, and the third digit represents everyone's permissions. When changing the default value, remember that 000 does not allow access to the logs and that allowing write permissions to everyone can result in the logs being overwritten or deleted by anyone. The newly configured access mode only affects new logs that are created; the mode is set when the log rotates to a new file. Parameter Description Entry DN cn=config Valid Range 000 through 777 Default Value 600 Syntax Integer Example nsslapd-auditfaillog-mode: 600 3.1.1.55. nsslapd-bakdir (Default Backup Directory) This parameter sets the path to the default backup directory. The Directory Server user must have write permissions in the configured directory. This setting does not require a server restart to take effect. Parameter Description Entry DN cn=config Valid Values Any local directory path. Default Value /var/lib/dirsrv/slapd- instance /bak Syntax DirectoryString Example nsslapd-bakdir: /var/lib/dirsrv/slapd- instance /bak 3.1.1.56. nsslapd-certdir (Certificate and Key Database Directory) This parameter defines the full path to the directory that Directory Server uses to store the Network Security Services (NSS) database of the instance. This database contains the private keys and certificates of the instance. As a fallback, Directory Server extracts the private key and certificates to this directory, if the server cannot extract them to the /tmp/ directory in a private name space. For details about private name spaces, see the PrivateTmp parameter description in the systemd.exec(5) man page. The directory specified in nsslapd-certdir must be owned by the user ID of the server, and only this user ID must have read-write permissions in this directory. For security reasons, no other users should have permissions to read or write to this directory. The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=config Valid Values An absolute path Default Value /etc/dirsrv/slapd- instance_name / Syntax DirectoryString Example nsslapd-certdir: /etc/dirsrv/slapd- instance_name / 3.1.1.57. nsslapd-certmap-basedn (Certificate Map Search Base) This attribute can be used when client authentication is performed using TLS certificates in order to avoid limitations of the security subsystem certificate mapping, configured in the /etc/dirsrv/slapd- instance_name /certmap.conf file. Depending on the configuration in this file, the certificate mapping may be done using a directory subtree search based at the root DN. If the search is based at the root DN, then the nsslapd-certmap-basedn attribute may force the search to be based at some entry other than the root. The valid value for this attribute is the DN of the suffix or subtree to use for certificate mapping. Parameter Description Entry DN cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example nsslapd-certmap-basedn: ou=People,dc=example,dc=com 3.1.1.58. nsslapd-config This read-only attribute is the config DN. Parameter Description Entry DN cn=config Valid Values Any valid configuration DN Default Value Syntax DirectoryString Example nsslapd-config: cn=config 3.1.1.59. nsslapd-cn-uses-dn-syntax-in-dns This parameter allows you to enable a DN inside a CN value. The Directory Server DN normalizer follows RFC4514 and keeps a white space if the RDN attribute type is not based on the DN syntax. However the Directory Server's configuration entry sometimes uses a cn attribute to store a DN value. For example in dn: cn="dc=A,dc=com", cn=mapping tree,cn=config , the cn should be normalized following the DN syntax. If this configuration is required, enable the nsslapd-cn-uses-dn-syntax-in-dns parameter. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-cn-uses-dn-syntax-in-dns: off 3.1.1.60. nsslapd-connection-buffer This attribute sets the connection buffering behavior. Possible values: 0 : Disable buffering. Only single Protocol Data Units (PDU) are read at a time. 1 : Regular fixed size LDAP_SOCKET_IO_BUFFER_SIZE of 512 bytes. 2 : Adaptable buffer size. The value 2 provides a better performance if the client sends a large amount of data at once. This is, for example, the case for large add and modify operations, or when many asynchronous requests are received over a single connections like during a replication. Parameter Description Entry DN cn=config Valid Values 0 | 1 | 2 Default Value 1 Syntax Integer Example nsslapd-connection-buffer: 1 3.1.1.61. nsslapd-connection-nocanon This option allows you to enable or disable the SASL NOCANON flag. Disabling avoids the Directory Server looking up DNS reverse entries for outgoing connections. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-connection-nocanon: on 3.1.1.62. nsslapd-conntablesize This attribute sets the connection table size, which determines the total number of connections supported by the server. Increase the value of this attribute if Directory Server is refusing connections because it is out of connection slots. When this occurs, the Directory Server's error log file records the message Not listening for new connections - too many fds open . It may be necessary to increase the operating system limits for the number of open files and number of open files per process, and it may be necessary to increase the ulimit for the number of open files ( ulimit -n ) in the shell that starts Directory Server. The size of the connection table is cap with nsslapd-maxdescriptor . See Section 3.1.1.119, "nsslapd-maxdescriptors (Maximum File Descriptors)" for more information. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Values Operating-system dependent Default Value The maximum number of files that the Directory Server process can open. See the getdtablesize() glibc function. Syntax Integer Example nsslapd-conntablesize: 4093 3.1.1.63. nsslapd-counters The nsslapd-counters attribute enables and disables Directory Server database and server performance counters. There can be a performance impact by keeping track of the larger counters. Turning off 64-bit integers for counters can have a minimal improvement on performance, although it negatively affects long term statistics tracking. This parameter is enabled by default. To disable counters, stop the Directory Server, edit the dse.ldif file directly, and restart the server. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-counters: on 3.1.1.64. nsslapd-csnlogging This attribute sets whether change sequence numbers (CSNs), when available, are to be logged in the access log. By default, CSN logging is turned on. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-csnlogging: on 3.1.1.65. nsslapd-defaultnamingcontext This attribute gives the naming context, of all configured naming contexts, which clients should use by default as a search base. This value is copied over to the root DSE as the defaultNamingContext attribute, which allows clients to query the root DSE to obtain the context and then to initiate a search with the appropriate base. Parameter Description Entry DN cn=config Valid Values Any root suffix DN Default Value The default user suffix Syntax DN Example nsslapd-defaultnamingcontext: dc=example,dc=com 3.1.1.66. nsslapd-disk-monitoring This attribute enables a thread which runs every ten (10) seconds to check the available disk space on the disk or mount where the Directory Server database is running. If the available disk space drops below a configured threshold, then the server begins reducing logging levels, disabling access or audit logs, and deleting rotated logs. If that does not free enough available space, then the server shuts down gracefully (after a wanring and grace period). Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-disk-monitoring: on 3.1.1.67. nsslapd-disk-monitoring-grace-period Sets a grace period to wait before shutting down the server after it hits half of the disk space limit set in Section 3.1.1.70, "nsslapd-disk-monitoring-threshold" . This gives the administrator time to clean out the disk and prevent a shutdown. Parameter Description Entry DN cn=config Valid Values Any integer (sets value in minutes) Default Value 60 Syntax Integer Example nsslapd-disk-monitoring-grace-period: 45 3.1.1.68. nsslapd-disk-monitoring-logging-critical Sets whether to shut down the server if the log directories pass the halfway point set in the disk space limit, Section 3.1.1.70, "nsslapd-disk-monitoring-threshold" . If this is enabled, then logging is not disabled and rotated logs are not deleted as means of reducing disk usage by the server. The server simply goes toward a shutdown process. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-disk-monitoring-logging-critical: on 3.1.1.69. nsslapd-disk-monitoring-readonly-on-threshold If the free disk space reaches half of the value you set in the nsslapd-disk-monitoring-threshold parameter, Directory Server shuts down the instance after the grace period set in nsslapd-disk-monitoring-grace-period is reached. However, if the disk runs out of space before the instance is down, data can be corrupted. To prevent this problem, enable the nsslapd-disk-monitoring-readonly-on-threshold parameter, the Directory Server sets the instance to read-only mode when the threshold is reached. Important With this setting, Directory Server does not start if the free disk space is below half of the threshold configured in the nsslapd-disk-monitoring-threshold . The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-disk-monitoring-readonly-on-threshold: off 3.1.1.70. nsslapd-disk-monitoring-threshold Sets the threshold, in bytes, to use to evaluate whether the server has enough available disk space. Once the space reaches half of this threshold, then the server begins a shut down process. For example, if the threshold is 2MB (the default), then once the available disk space reaches 1MB, the server will begin to shut down. By default, the threshold is evaluated backs on the disk space used by the configuration, transaction, and database directories for the Directory Server instance. If the Section 3.1.1.68, "nsslapd-disk-monitoring-logging-critical" attribute is enabled, then the log directory is included in the evaluation. Parameter Description Entry DN cn=config Valid Values * 0 to the maximum 32-bit integer value (2147483647) on 32-bit systems * 0 to the maximum 64-bit integer value (9223372036854775807) on 64-bit systems Default Value 2000000 (2MB) Syntax DirectoryString Example nsslapd-disk-monitoring-threshold: 2000000 3.1.1.71. nsslapd-dn-validate-strict The Section 3.1.1.168, "nsslapd-syntaxcheck" attribute enables the server to verify that any new or modified attribute value matches the required syntax for that attribute. However, the syntax rules for DNs have grown increasingly strict. Attempting to enforce DN syntax rules in RFC 4514 could break many servers using older syntax definitions. By default, then nsslapd-syntaxcheck validates DNs using RFC 1779 or RFC 2253 . The nsslapd-dn-validate-strict attribute explicitly enables strict syntax validation for DNs, according to section 3 in RFC 4514 . If this attribute is set to off (the default), the server normalizes the value before checking it for syntax violations. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-dn-validate-strict: off 3.1.1.72. nsslapd-ds4-compatible-schema Makes the schema in cn=schema compatible with 4.x versions of Directory Server. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-ds4-compatible-schema: off 3.1.1.73. nsslapd-enable-turbo-mode The Directory Server turbo mode is a feature that enables a worker thread to be dedicated to a connection and continuously read incoming operations from that connection. This can improve the performance on very active connections, and the feature is enabled by default. Worker threads are processing the LDAP operation received by the server. The number of worker threads is defined in the nsslapd-threadnumber parameter. Every five seconds, each worker thread evaluates if the activity level of its current connection is one of the highest among all established connections. Directory Server measures the activity as the number of operations initiated since the last check, and switches a worker thread in turbo mode if the activity of the current connection is one of the highest. If you encounter long execution times ( etime value in log files) for bind operations, such as one second or longer, deactivating the turbo mode can improve the performance. However, in some cases, long bind times are a symptom of networking or hardware issues. In these situations, disabling the turbo mode does not result in improved performance. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-enable-turbo-mode: on 3.1.1.74. nsslapd-enable-upgrade-hash During a simple bind, Directory Server has access to the plain text password due to the nature of bind operations. If the nsslapd-enable-upgrade-hash parameter is enabled and a user authenticates, Directory Server checks if the userPassword attribute of the user uses the hashing algorithm set in the passwordStorageScheme attribute. If the algorithm is different, the server hashes the plain text password with the algorithm from passwordStorageScheme and updates the value of the user's userPassword attribute. For example, if you import a user entry with a password that is hashed using a weak algorithm, the server automatically re-hashes the passwords on the first login of the user using the algorithm set in passwordStorageScheme , which is, by default, PBKDF2_SHA256 . Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-enable-upgrade-hash: on 3.1.1.75. nsslapd-enquote-sup-oc (Enable Superior Object Class Enquoting) This attribute is deprecated and will be removed in a future version of Directory Server. This attribute controls whether quoting in the objectclass attributes contained in the cn=schema entry conforms to the quoting specified by Internet draft RFC 2252. By default, the Directory Server conforms to RFC 2252, which indicates that this value should not be quoted. Only very old clients need this value set to on , so leave it off . Turning this attribute on or off does not affect Directory Server Console. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-enquote-sup-oc: off 3.1.1.76. nsslapd-entryusn-global The nsslapd-entryusn-global parameter defines if the USN plug-in assigns unique update sequence numbers (USN) across all back end databases or to each database individually. For unique USNs across all back end databases, set this parameter to on . For further details, see Section 6.8, "entryusn" . You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-entryusn-global: off 3.1.1.77. nsslapd-entryusn-import-initval Entry update sequence numbers (USNs) are not preserved when entries are exported from one server and imported into another, including when initializing a database for replication. By default, the entry USNs for imported entries are set to zero. It is possible to configure a different initial value for entry USNs using nsslapd-entryusn-import-initval . This sets a starting USN which is used for all imported entries. There are two possible values for nsslapd-entryusn-import-initval : An integer, which is the explicit start number used for every imported entry. , which means that every imported entry uses whatever the highest entry USN value was on the server before the import operation, incremented by one. Parameter Description Entry DN cn=config Valid Values Any integer | Default Value Syntax DirectoryString Example nsslapd-entryusn-import-initval: 3.1.1.78. nsslapd-errorlog (Error Log) This attribute sets the path and filename of the log used to record error messages generated by the Directory Server. These messages can describe error conditions, but more often they contain informative conditions, such as: Server startup and shutdown times. The port number that the server uses. This log contains differing amounts of information depending on the current setting of the Log Level attribute. See Section 3.1.1.79, "nsslapd-errorlog-level (Error Log Level)" for more information. Parameter Description Entry DN cn=config Valid Values Any valid filename Default Value /var/log/dirsrv/slapd- instance /errors Syntax DirectoryString Example nsslapd-errorlog: /var/log/dirsrv/slapd- instance /errors For error logging to be enabled, this attribute must have a valid path and filename, and the nsslapd-errorlog-logging-enabled configuration attribute must be switched to on . The table lists the four possible combinations of values for these two configuration attributes and their outcome in terms of disabling or enabling of error logging. Table 3.5. Possible Combinations for nsslapd-errorlog Configuration Attributes Attributes in dse.ldif Value Logging enabled or disabled nsslapd-errorlog-logging-enabled nsslapd-errorlog on empty string Disabled nsslapd-errorlog-logging-enabled nsslapd-errorlog on filename Enabled nsslapd-errorlog-logging-enabled nsslapd-errorlog off empty string Disabled nsslapd-errorlog-logging-enabled nsslapd-errorlog off filename Disabled 3.1.1.79. nsslapd-errorlog-level (Error Log Level) This attribute sets the level of logging for the Directory Server. The log level is additive; that is, specifying a value of 3 includes both levels 1 and 2 . The default value for nsslapd-errorlog-level is 16384 . You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values * 1 - Trace function calls. Logs a message when the server enters and exits a function. * 2 - Debug packet handling. * 4 - Heavy trace output debugging. * 8 - Connection management. * 16 - Print out packets sent/received. * 32 - Search filter processing. * 64 - Config file processing. * 128 - Access control list processing. * 1024 - Log communications with shell databases. * 2048 - Log entry parsing debugging. * 4096 - Housekeeping thread debugging. * 8192 - Replication debugging. * 16384 - Default level of logging used for critical errors and other messages that are always written to the error log; for example, server startup messages. Messages at this level are always included in the error log, regardless of the log level setting. * 32768 - Database cache debugging. * 65536 - Server plug-in debugging. It writes an entry to the log file when a server plug-in calls slapi-log-error . * 262144 - Access control summary information, much less verbose than level 128 . This value is recommended for use when a summary of access control processing is needed. Use 128 for very detailed processing messages. * 524288 - LMDB database debugging. Default Value 16384 Syntax Integer Example nsslapd-errorlog-level: 8192 3.1.1.80. nsslapd-errorlog-list This read-only attribute provides a list of error log files. Parameter Description Entry DN cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-errorlog-list: errorlog2,errorlog3 3.1.1.81. nsslapd-errorlog-logexpirationtime (Error Log Expiration Time) This attribute sets the maximum age that a log file is allowed to reach before it is deleted. This attribute supplies only the number of units. The units (day, week, month, and so forth) are given by the nsslapd-errorlog-logexpirationtimeunit attribute. Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) A value of -1 or 0 means that the log never expires. Default Value -1 Syntax Integer Example nsslapd-errorlog-logexpirationtime: 1 3.1.1.82. nsslapd-errorlog-logexpirationtimeunit (Error Log Expiration Time Unit) This attribute sets the units for the nsslapd-errorlog-logexpirationtime attribute. If the unit is unknown by the server, then the log never expires. Parameter Description Entry DN cn=config Valid Values month | week | day Default Value month Syntax DirectoryString Example nsslapd-errorlog-logexpirationtimeunit: week 3.1.1.83. nsslapd-errorlog-logging-enabled (Enable Error Logging) Turns error logging on and off. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-errorlog-logging-enabled: on 3.1.1.84. nsslapd-errorlog-logmaxdiskspace (Error Log Maximum Disk Space) This attribute sets the maximum amount of disk space in megabytes that the error logs are allowed to consume. If this value is exceeded, the oldest error log is deleted. When setting a maximum disk space, consider the total number of log files that can be created due to log file rotation. Also, remember that there are three different log files (access log, audit log, and error log) maintained by the Directory Server, each of which consumes disk space. Compare these considerations to the total amount of disk space for the error log. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the disk space allowed to the error log is unlimited in size. Default Value 100 Syntax Integer Example nsslapd-errorlog-logmaxdiskspace: 10000 3.1.1.85. nsslapd-errorlog-logminfreediskspace (Error Log Minimum Free Disk Space) This attribute sets the minimum allowed free disk space in megabytes. When the amount of free disk space falls below the value specified on this attribute, the oldest error log is deleted until enough disk space is freed to satisfy this attribute. Parameter Description Entry DN cn=config Valid Range -1 (unlimited) | 1 to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example nsslapd-errorlog-logminfreediskspace: -1 3.1.1.86. nsslapd-errorlog-logrotationsync-enabled (Error Log Rotation Sync Enabled) This attribute sets whether error log rotation is to be synchronized with a particular time of the day. Synchronizing log rotation this way can generate log files at a specified time during a day, such as midnight to midnight every day. This makes analysis of the log files much easier because they then map directly to the calendar. For error log rotation to be synchronized with time-of-day, this attribute must be enabled with the nsslapd-errorlog-logrotationsynchour and nsslapd-errorlog-logrotationsyncmin attribute values set to the hour and minute of the day for rotating log files. For example, to rotate error log files every day at midnight, enable this attribute by setting its value to on , and then set the values of the nsslapd-errorlog-logrotationsynchour and nsslapd-errorlog-logrotationsyncmin attributes to 0 . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-errorlog-logrotationsync-enabled: on 3.1.1.87. nsslapd-errorlog-logrotationsynchour (Error Log Rotation Sync Hour) This attribute sets the hour of the day for rotating error logs. This attribute must be used in conjunction with nsslapd-errorlog-logrotationsync-enabled and nsslapd-errorlog-logrotationsyncmin attributes. Parameter Description Entry DN cn=config Valid Range 0 through 23 Default Value 0 Syntax Integer Example nsslapd-errorlog-logrotationsynchour: 23 3.1.1.88. nsslapd-errorlog-logrotationsyncmin (Error Log Rotation Sync Minute) This attribute sets the minute of the day for rotating error logs. This attribute must be used in conjunction with nsslapd-errorlog-logrotationsync-enabled and nsslapd-errorlog-logrotationsynchour attributes. Parameter Description Entry DN cn=config Valid Range 0 through 59 Default Value 0 Syntax Integer Example nsslapd-errorlog-logrotationsyncmin: 30 3.1.1.89. nsslapd-errorlog-logrotationtime (Error Log Rotation Time) This attribute sets the time between error log file rotations. This attribute supplies only the number of units. The units (day, week, month, and so forth) are given by the nsslapd-errorlog-logrotationtimeunit (Error Log Rotation Time Unit) attribute. Directory Server rotates the log at the first write operation after the configured interval has expired, regardless of the size of the log. Although it is not recommended for performance reasons to specify no log rotation, as the log grows indefinitely, there are two ways of specifying this. Either set the nsslapd-errorlog-maxlogsperdir attribute value to 1 or set the nsslapd-errorlog-logrotationtime attribute to -1 . The server checks the nsslapd-errorlog-maxlogsperdir attribute first, and, if this attribute value is larger than 1 , the server then checks the nsslapd-errorlog-logrotationtime attribute. See Section 3.1.1.92, "nsslapd-errorlog-maxlogsperdir (Maximum Number of Error Log Files)" for more information. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647), where a value of -1 means that the time between error log file rotation is unlimited). Default Value 1 Syntax Integer Example nsslapd-errorlog-logrotationtime: 100 3.1.1.90. nsslapd-errorlog-logrotationtimeunit (Error Log Rotation Time Unit) This attribute sets the units for nsslapd-errorlog-logrotationtime (Error Log Rotation Time). If the unit is unknown by the server, then the log never expires. Parameter Description Entry DN cn=config Valid Values month | week | day | hour | minute Default Value week Syntax DirectoryString Example nsslapd-errorlog-logrotationtimeunit: day 3.1.1.91. nsslapd-errorlog-maxlogsize (Maximum Error Log Size) This attribute sets the maximum error log size in megabytes. When this value is reached, the error log is rotated, and the server starts writing log information to a new log file. If nsslapd-errorlog-maxlogsperdir is set to 1 , the server ignores this attribute. When setting a maximum log size, consider the total number of log files that can be created due to log file rotation. Also, remember that there are three different log files (access log, audit log, and error log) maintained by the Directory Server, each of which consumes disk space. Compare these considerations to the total amount of disk space for the error log. Parameter Description Entry DN cn=config Valid Range -1 | 1 to the maximum 32 bit integer value (2147483647) where a value of -1 means the log file is unlimited in size. Default Value 100 Syntax Integer Example nsslapd-errorlog-maxlogsize: 100 3.1.1.92. nsslapd-errorlog-maxlogsperdir (Maximum Number of Error Log Files) This attribute sets the total number of error logs that can be contained in the directory where the error log is stored. Each time the error log is rotated, a new log file is created. When the number of files contained in the error log directory exceeds the value stored on this attribute, then the oldest version of the log file is deleted. The default is 1 log. If this default is accepted, the server does not rotate the log, and it grows indefinitely. If the value for this attribute is higher than 1 , then check the nsslapd-errorlog-logrotationtime attribute to establish whether log rotation is specified. If the nsslapd-errorlog-logrotationtime attribute has a value of -1 , then there is no log rotation. See Section 3.1.1.89, "nsslapd-errorlog-logrotationtime (Error Log Rotation Time)" for more information. Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) Default Value 1 Syntax Integer Example nsslapd-errorlog-maxlogsperdir: 10 3.1.1.93. nsslapd-errorlog-mode (Error Log File Permission) This attribute sets the access mode or file permissions with which error log files are to be created. The valid values are any combination of 000 to 777 since they mirror numbered or absolute UNIX file permissions. That is, the value must be a combination of a 3-digit number, the digits varying from 0 through 7 : 0 - None 1 - Execute only 2 - Write only 3 - Write and execute 4 - Read only 5 - Read and execute 6 - Read and write 7 - Read, write, and execute In the 3-digit number, the first digit represents the owner's permissions, the second digit represents the group's permissions, and the third digit represents everyone's permissions. When changing the default value, remember that 000 does not allow access to the logs and that allowing write permissions to everyone can result in the logs being overwritten or deleted by anyone. The newly configured access mode only affects new logs that are created; the mode is set when the log rotates to a new file. Parameter Description Entry DN cn=config Valid Range 000 through 777 Default Value 600 Syntax Integer Example nsslapd-errorlog-mode: 600 3.1.1.94. nsslapd-force-sasl-external When establishing a TLS connection, a client sends its certificate first and then issues a BIND request using the SASL/EXTERNAL mechanism. Using SASL/EXTERNAL tells the Directory Server to use the credentials in the certificate for the TLS handshake. However, some clients do not use SASL/EXTERNAL when they send their BIND request, so the Directory Server processes the bind as a simple authentication request or an anonymouse request and the TLS connection fails. The nsslapd-force-sasl-external attribute forces clients in certificate-based authentication to send the BIND request using the SASL/EXTERNAL method. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax String Example nsslapd-force-sasl-external: on 3.1.1.95. nsslapd-groupevalnestlevel This attribute is deprecated, and documented here only for historical purposes. The Access Control Plug-in does not use the value specified by the nsslapd-groupevalnestlevel attribute to set the number of levels of nesting that access control performs for group evaluation. Instead, the number of levels of nesting is hardcoded as 5 . Parameter Description Entry DN cn=config Valid Range 0 to 5 Default Value 5 Syntax Integer Example nsslapd-groupevalnestlevel: 5 3.1.1.96. nsslapd-haproxy-trusted-ip (HAProxy Trusted IP) The nsslapd-haproxy-trusted-ip attribute configures the list of trusted proxy servers. When you set nsslapd-haproxy-trusted-ip , Directory Server uses HAProxy protocol to receive client IP addresses via an additional TCP header to evaluate access control instructions (ACIs) correctly and log the client traffic. If an untrusted proxy server initiates a bind request, Directory Server rejects the request and records the following message to the error log file: Parameter Description Entry DN cn=config Valid Range IPv4 or IPv6 addresses Default Value Syntax DirectoryString Example nsslapd-haproxy-trusted-ip: 127.0.0.1 3.1.1.97. nsslapd-idletimeout (Default Idle Timeout) This attribute sets the amount of time in seconds after which an idle LDAP client connection is closed by the server. A value of 0 means that the server never closes idle connections. This setting applies to all connections and all users. Idle timeout is enforced when the connection table is walked, when poll() does not return zero. Therefore, a server with a single connection never enforces the idle timeout. Use the nsIdleTimeout operational attribute, which can be added to user entries, to override the value assigned to this attribute. For details, see the "Setting Resource Limits Based on the Bind DN" section in the Red Hat Directory Server Administration Guide . Note For very large databases, with millions of entries, this attribute must have a high enough value that the online initialization process can complete or replication will fail when the connection to the server times out. Alternatively, the nsIdleTimeout attribute can be set to a high value on the entry used as the supplier bind DN. Parameter Description Entry DN cn=config Valid Range 0 to the maximum 32 bit integer value (2147483647) Default Value 3600 Syntax Integer Example nsslapd-idletimeout: 3600 3.1.1.98. nsslapd-ignore-virtual-attrs This parameter allows to disable the virtual attribute lookup in a search entry. If you do not require virtual attributes, you can disable virtual attribute lookups in search results to increase the speed of searches. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-ignore-virtual-attrs: off 3.1.1.99. nsslapd-instancedir (Instance Directory) This attribute is deprecated. There are now separate configuration parameters for instance-specific paths, such as nsslapd-certdir and nsslapd-lockdir . See the documentation for the specific directory path that is set. 3.1.1.100. nsslapd-ioblocktimeout (IO Block Time Out) This attribute sets the amount of time in milliseconds after which the connection to a stalled LDAP client is closed. An LDAP client is considered to be stalled when it has not made any I/O progress for read or write operations. Parameter Description Entry DN cn=config Valid Range 0 to the maximum 32 bit integer value (2147483647) in ticks Default Value 10000 Syntax Integer Example nsslapd-ioblocktimeout: 10000 3.1.1.101. nsslapd-lastmod (Track Modification Time) This attribute sets whether the Directory Server maintains the creatorsName , createTimestamp , modifiersName , and modifyTimestamp operational attributes for newly created or updated entries. Important Red Hat recommends not disabling tracking these attributes. If disabled, entries do not get a unique ID assigned in the nsUniqueID attribute and replication does not work. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-lastmod: on 3.1.1.102. nsslapd-ldapiautobind (Enable Autobind) The nsslapd-ldapiautobind sets whether the server will allow users to autobind to Directory Server using LDAPI. Autobind maps the UID or GUID number of a system user to a Directory Server user, and automatically authenticates the user to Directory Server based on those credentials. The Directory Server connection occurs over UNIX socket. Along with enabling autobind, configuring autobind requires configuring mapping entries. The nsslapd-ldapimaprootdn maps a root user on the system to the Directory Manager. The nsslapd-ldapimaptoentries maps regular users to Directory Server users, based on the parameters defined in the nsslapd-ldapiuidnumbertype , nsslapd-ldapigidnumbertype , and nsslapd-ldapientrysearchbase attributes. Autobind can only be enabled if LDAPI is enabled, meaning the nsslapd-ldapilisten is on and the nsslapd-ldapifilepath attribute is set to an LDAPI socket. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-ldapiautobind: off 3.1.1.103. nsslapd-ldapientrysearchbase (Search Base for LDAPI Authentication Entries) With autobind, it is possible to map system users to Directory Server user entries, based on the system user's UID and GUID numbers. This requires setting Directory Server parameters for which attribute to use for the UID number ( nsslapd-ldapiuidnumbertype ) and GUID number ( nsslapd-ldapigidnumbertype ) and setting the search base to use to search for matching user entries. The nsslapd-ldapientrysearchbase gives the subtree to search for user entries to use for autobind. Parameter Description Entry DN cn=config Valid Values DN Default Value The suffix created when the server instance was created, such as dc=example,dc=com Syntax DN Example nsslapd-ldapientrysearchbase: ou=people,dc=example,dc=om 3.1.1.104. nsslapd-ldapifilepath (File Location for LDAPI Socket) LDAPI connects a user to an LDAP server over a UNIX socket rather than TCP. In order to configure LDAPI, the server must be configured to communicate over a UNIX socket. The UNIX socket to use is set in the nsslapd-ldapifilepath attribute. Parameter Description Entry DN cn=config Valid Values Any directory path Default Value /var/run/dirsrv/slapd-example.socket Syntax Case-exact string Example nsslapd-ldapifilepath: /var/run/slapd-example.socket 3.1.1.105. nsslapd-ldapigidnumbertype (Attribute Mapping for System GUID Number) Autobind can be used to authenticate system users to the server automatically and connect to the server using a UNIX socket. To map the system user to a Directory Server user for authentication, the system user's UID and GUID numbers should be mapped to be a Directory Server attribute. The nsslapd-ldapigidnumbertype attribute points to the Directory Server attribute to map system GUIDs to user entries. Users can only connect to the server with autobind if LDAPI is enabled ( nsslapd-ldapilisten and nsslapd-ldapifilepath ), autobind is enabled ( nsslapd-ldapiautobind ), and autobind mapping is enabled for regular users ( nsslapd-ldapimaptoentries ). Parameter Description Entry DN cn=config Valid Values Any Directory Server attribute Default Value gidNumber Syntax DirectoryString Example nsslapd-ldapigidnumbertype: gidNumber 3.1.1.106. nsslapd-ldapilisten (Enable LDAPI) The nsslapd-ldapilisten enables LDAPI connections to the Directory Server. LDAPI allows users to connect to the Directory Server over a UNIX socket rather than a standard TCP port. Along with enabling LDAPI by setting nsslapd-ldapilisten to on , there must also be a UNIX socket set for LDAPI in the nsslapd-ldapifilepath attribute. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-ldapilisten: on 3.1.1.107. nsslapd-ldapimaprootdn (Autobind Mapping for Root User) With autobind, a system user is mapped to a Directory Server user and then automatically authenticated to the Directory Server over a UNIX socket. The root system user (the user with a UID of 0) is mapped to whatever Directory Server entry is specified in the nsslapd-ldapimaprootdn attribute. Parameter Description Entry DN cn=config Valid Values Any DN Default Value cn=Directory Manager Syntax DN Example nsslapd-ldapimaprootdn: cn=Directory Manager 3.1.1.108. nsslapd-ldapimaptoentries (Enable Autobind Mapping for Regular Users) With autobind, a system user is mapped to a Directory Server user and then automatically authenticated to the Directory Server over a UNIX socket. This mapping is automatic for root users, but it must be enabled for regular system users through the nsslapd-ldapimaptoentries attribute. Setting this attribute to on enables mapping for regular system users to Directory Server entries. If this attribute is not enabled, then only root users can use autobind to authenticate to the Directory Server, and all other users connect anonymously. The mappings themselves are configured through the nsslapd-ldapiuidnumbertype and nsslapd-ldapigidnumbertype attributes, which map Directory Server attributes to the user's UID and GUID numbers. Users can only connect to the server with autobind if LDAPI is enabled ( nsslapd-ldapilisten and nsslapd-ldapifilepath ) and autobind is enabled ( nsslapd-ldapiautobind ). Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-ldapimaptoentries: on 3.1.1.109. nsslapd-ldapiuidnumbertype Autobind can be used to authenticate system users to the server automatically and connect to the server using a UNIX socket. To map the system user to a Directory Server user for authentication, the system user's UID and GUID numbers must be mapped to be a Directory Server attribute. The nsslapd-ldapiuidnumbertype attribute points to the Directory Server attribute to map system UIDs to user entries. Users can only connect to the server with autobind if LDAPI is enabled ( nsslapd-ldapilisten and nsslapd-ldapifilepath ), autobind is enabled ( nsslapd-ldapiautobind ), and autobind mapping is enabled for regular users ( nsslapd-ldapimaptoentries ). Parameter Description Entry DN cn=config Valid Values Any Directory Server attribute Default Value uidNumber Syntax DirectoryString Example nsslapd-ldapiuidnumbertype: uidNumber 3.1.1.110. nsslapd-ldifdir Directory Server exports files in LDAP Data Interchange Format (LDIF) format to the directory set in this parameter when using the db2ldif or db2ldif.pl . The directory must be owned by the Directory Server user and group. Only this user and group must have read and write access in this directory. The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=config Valid Values Any directory writable by the Directory Server user Default Value /var/lib/dirsrv/slapd- instance_name /ldif/ Syntax DirectoryString Example nsslapd-ldifdir: /var/lib/dirsrv/slapd- instance_name /ldif/ 3.1.1.111. nsslapd-listen-backlog-size This attribute sets the maximum of the socket connection backlog. The listen service sets the number of sockets available to receive incoming connections. The backlog setting sets a maximum length for how long the queue for the socket (sockfd) can grow before refusing connections. Parameter Description Entry DN cn=config Valid Values The maximum 64-bit integer value (9223372036854775807) Default Value 128 Syntax Integer Example nsslapd-listen-backlog-size: 128 3.1.1.112. nsslapd-listenhost (Listen to IP Address) This attribute allows multiple Directory Server instances to run on a multihomed machine (or makes it possible to limit listening to one interface of a multihomed machine). There can be multiple IP addresses associated with a single hos tname, and these IP addresses can be a mix of both IPv4 and IPv6. This parameter can be used to restrict the Directory Server instance to a single IP interface. If a host name is given as the nsslapd-listenhost value, then the Directory Server responds to requests for every interface associated with the host name. If a single IP interface (either IPv4 or IPv6) is given as the nsslapd-listenhost value, Directory Server only responds to requests sent to that specific interface. Either an IPv4 or IPv6 address can be used. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Values Any local host name, IPv4 or IPv6 address Default Value Syntax DirectoryString Example nsslapd-listenhost: ldap.example.com 3.1.1.113. nsslapd-localhost (Local Host) This attribute specifies the host machine on which the Directory Server runs. This attribute creates the referral URL that forms part of the MMR protocol. In a high-availability configuration with failover nodes, that referral should point to the virtual name of the cluster, not the local host name. Parameter Description Entry DN cn=config Valid Values Any fully qualified host name. Default Value Hostname of installed machine. Syntax DirectoryString Example nsslapd-localhost: phonebook.example.com 3.1.1.114. nsslapd-localuser (Local User) This attribute sets the user as whom the Directory Server runs. The group as which the user runs is derived from this attribute by examining the user's primary group. Should the user change, then all of the instance-specific files and directories for this instance need to be changed to be owned by the new user, using a tool such as chown . The value for the nsslapd-localuser is set initially when the server instance is configured. Parameter Description Entry DN cn=config Valid Values Any valid user Default Value Syntax DirectoryString Example nsslapd-localuser: dirsrv 3.1.1.115. nsslapd-lockdir (Server Lock File Directory) This is the full path to the directory the server uses for lock files. The default value is /var/lock/dirsrv/slapd- instance . Changes to this value will not take effect until the server is restarted. Parameter Description Entry DN cn=config Valid Values Absolute path to a directory owned by the server user ID with write access to the server ID Default Value /var/lock/dirsrv/slapd- instance Syntax DirectoryString Example nsslapd-lockdir: /var/lock/dirsrv/slapd- instance 3.1.1.116. nsslapd-localssf The nsslapd-localssf parameter sets the security strength factor (SSF) for LDAPI connections. Directory Server allows LDAPI connections only if the value set in nsslapd-localssf is greater or equal than the value set in the nsslapd-minssf parameter. Therefore, LDAPI connections meet the minimum SSF set in nsslapd-minssf . You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values 0 to the maximum 32-bit integer value (2147483647) Default Value 71 Syntax Integer Example nsslapd-localssf: 71 3.1.1.117. nsslapd-logging-hr-timestamps-enabled (Enable or Disable High-resolution Log Timestamps) Controls whether logs will use high resolution timestamps with nanosecond precision, or standard resolution timestamps with one second precision. Enabled by default. Set this option to off to revert log timestamps back to one second precision. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-logging-hr-timestamps-enabled: on 3.1.1.118. nsslapd-maxbersize (Maximum Message Size) Defines the maximum size in bytes allowed for an incoming message. This limits the size of LDAP requests that can be handled by the Directory Server. Limiting the size of requests prevents some kinds of denial of service attacks. The limit applies to the total size of the LDAP request. For example, if the request is to add an entry and if the entry in the request is larger than the configured value or the default, then the add request is denied. However, the limit is not applied to replication processes. Be cautious before changing this attribute. This setting does not require a server restart to take effect. Parameter Description Entry DN cn=config Valid Range 0 - 2 gigabytes (2,147,483,647 bytes) Zero 0 means that the default value should be used. Default Value 2097152 Syntax Integer Example nsslapd-maxbersize: 2097152 3.1.1.119. nsslapd-maxdescriptors (Maximum File Descriptors) This attribute sets the maximum, platform-dependent number of file descriptors that the Directory Server tries to use. A file descriptor is used whenever a client connects to the server. File descriptors are also used by access logs, error logs, audit logs, database files (indexes and transaction logs), and as sockets for outgoing connections to other servers for replication and chaining. The number of descriptors available for TCP/IP to serve client connections is determined by the nsslapd-conntablesize attribute. The default value for this attribute is set to the file descriptor soft limit, which defaults to 1024. However, if you configure this attribute manually, the server updates the process file descriptor soft limit to match. If this value is set too high, the Directory Server queries the operating system for the maximum allowable value, and then uses that value. It also issues an information message in the error log. If this value is set to an invalid value remotely, by using the Directory Server Console or ldapmodify , the server rejects the new value, keeps the old value, and responds with an error. Some operating systems let users configure the number of file descriptors available to a process. See the operating system documentation for details on file descriptor limits and configuration. The dsktune program (explained in the Red Hat Directory Server Installation Guide ) can be used to suggest changes to the system kernel or TCP/IP tuning attributes, including increasing the number of file descriptors if necessary. Increased the value on this attribute if the Directory Server is refusing connections because it is out of file descriptors. When this occurs, the following message is written to the Directory Server's error log file: See Section 3.1.1.62, "nsslapd-conntablesize" for more information about increasing the number of incoming connections. Note UNIX shells usually have configurable limits on the number of file descriptors. See the operating system documentation for further information about limit and ulimit , as these limits can often cause problems. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Range 1 to 65535 Default Value 4096 Syntax Integer Example nsslapd-maxdescriptors: 4096 3.1.1.120. nsslapd-maxsasliosize (Maximum SASL Packet Size) When a user is authenticated to the Directory Server over SASL GSS-API, the server must allocate a certain amount of memory to the client to perform LDAP operations, according to how much memory the client requests. It is possible for an attacker to send such a large packet size that it crashes the Directory Server or ties it up indefinitely as part of a denial of service attack. The packet size which the Directory Server will allow for SASL clients can be limited using the nsslapd-maxsasliosize attribute. This attribute sets the maximum allowed SASL IO packet size that the server will accept. When an incoming SASL IO packet is larger than the nsslapd-maxsasliosize limit, the server immediately disconnects the client and logs a message to the error log, so that an administrator can adjust the setting if necessary. This attribute value is specified in bytes. Parameter Description Entry DN cn=config Valid Range * -1 (unlimited) to the maximum 32-bit integer value (2147483647) on 32-bit systems * -1 (unlimited) to the maximum 64-bit integer value (9223372036854775807) on 64-bit systems Default Value 2097152 (2MB) Syntax Integer Example nsslapd-maxsasliosize: 2097152 3.1.1.121. nsslapd-maxthreadsperconn (Maximum Threads per Connection) Defines the maximum number of threads that a connection should use. For normal operations where a client binds and only performs one or two operations before unbinding, use the default value. For situations where a client binds and simultaneously issues many requests, increase this value to allow each connection enough resources to perform all the operations. This attribute is not available from the server console. Parameter Description Entry DN cn=config Valid Range 1 to maximum threadnumber Default Value 5 Syntax Integer Example nsslapd-maxthreadsperconn: 5 3.1.1.122. nsslapd-minssf A security strength factor is a relative measurement of how strong a connection is according to its key strength. The SSF determines how secure an TLS or SASL connection is. The nsslapd-minssf attribute sets a minimum SSF requirement for any connection to the server; any connection attempts that are weaker than the minimum SSF are rejected. TLS and SASL connections can be mixed in a connection to the Directory Server. These connections generally have different SSFs. The higher of the two SSFs is used to compare to the minimum SSF requirement. Setting the SSF value to 0 means that there is no minimum setting. Parameter Description Entry DN cn=config Valid Values Any positive integer Default Value 0 (off) Syntax DirectoryString Example nsslapd-minssf: 128 3.1.1.123. nsslapd-minssf-exclude-rootdse A security strength factor is a relative measurement of how strong a connection is according to its key strength. The SSF determines how secure an TLS or SASL connection is. The nsslapd-minssf-exclude-rootdse attribute sets a minimum SSF requirement for any connection to the server except for queries for the root DSE . This enforces appropriate SSF values for most connections, while still allowing clients to get required information about the server configuration from the root DSE without having to establish a secure connection first. Parameter Description Entry DN cn=config Valid Values Any positive integer Default Value 0 (off) Syntax DirectoryString Example nsslapd-minssf-exclude-rootdse: 128 3.1.1.124. nsslapd-moddn-aci This parameter controls the ACI checks when directory entries are moved from one subtree to another and using source and target restrictions in moddn operations. For backward compatibility, you can disable the ACI checks. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-moddn-aci: on 3.1.1.125. nsslapd-malloc-mmap-threshold If a Directory Server instance is started as a service using the systemctl utility, environment variables are not passed to the server unless you set them in the /etc/sysconfig/dirsrv or /etc/sysconfig/dirsrv- instance_name file. For further details, see the systemd.exec (3) man page. Instead of manually editing the service files to set the M_MMAP_THRESHOLD environment variable, the nsslapd-malloc-mmap-threshold parameter enables you to set the value in the Directory Server configuration. For further details, see the M_MMAP_THRESHOLD parameter description in the mallopt (3) man page. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Range 0 - 33554432 Default Value See the M_MMAP_THRESHOLD parameter description in the mallopt (3) man page. Syntax Integer Example nsslapd-malloc-mmap-threshold: 33554432 3.1.1.126. nsslapd-malloc-mxfast If a Directory Server instance is started as a service using the systemctl utility, environment variables are not passed to the server unless you set them in the /etc/sysconfig/dirsrv or /etc/sysconfig/dirsrv- instance_name file. For further details, see the systemd.exec (3) man page. Instead of manually editing the service files to set the M_MXFAST environment variable, the nsslapd-malloc-mxfast parameter enables you to set the value in the Directory Server configuration. For further details, see the M_MXFAST parameter description in the mallopt (3) man page. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Range 0 - 80 * (sizeof(size_t) / 4) Default Value See the M_MXFAST parameter description in the mallopt (3) man page. Syntax Integer Example nsslapd-malloc-mxfast: 1048560 3.1.1.127. nsslapd-malloc-trim-threshold If a Directory Server instance is started as a service using the systemctl utility, environment variables are not passed to the server unless you set them in the /etc/sysconfig/dirsrv or /etc/sysconfig/dirsrv- instance_name file. For further details, see the systemd.exec (3) man page. Instead of manually editing the service files to set the M_TRIM_THRESHOLD environment variable, the nsslapd-malloc-trim-threshold parameter enables you to set the value in the Directory Server configuration. For further details, see the M_TRIM_THRESHOLD parameter description in the mallopt (3) man page. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Range 0 to 2^31-1 Default Value See the M_TRIM_THRESHOLD parameter description in the mallopt (3) man page. Syntax Integer Example nsslapd-malloc-trim-threshold: 131072 3.1.1.128. nsslapd-nagle When the value of this attribute is off , the TCP_NODELAY option is set so that LDAP responses (such as entries or result messages) are sent back to a client immediately. When the attribute is turned on, default TCP behavior applies; specifically, sending data is delayed so that additional data can be grouped into one packet of the underlying network MTU size, typically 1500 bytes for Ethernet. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-nagle: off 3.1.1.129. nsslapd-ndn-cache-enabled Normalizing distinguished names (DN) is a resource intensive task. If the nsslapd-ndn-cache-enabled parameter is enabled, Directory Server caches normalized DNs in memory. Update the nsslapd-ndn-cache-max-size parameter to set the maximum size of this cache. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-ndn-cache-enabled: on 3.1.1.130. nsslapd-ndn-cache-max-size Normalizing distinguished names (DN) is a resource intensive task. If the nsslapd-ndn-cache-enabled parameter is enabled, Directory Server caches normalized DNs in memory. The nsslapd-ndn-cache-max-size parameter sets the maximum size of this cache. If a DN requested is not cached already, it is normalized and added. When the cache size limit is exceeded, Directory Server removes the least recently used 10,000 DNs from the cache. However, a minimum of 10,000 DNs is always kept cached. Parameter Description Entry DN cn=config Valid Values 0 to the maximum 32-bit integer value (2147483647) Default Value 20971520 Syntax Integer Example nsslapd-ndn-cache-max-size: 20971520 3.1.1.131. nsslapd-outbound-ldap-io-timeout This attribute limits the I/O wait time for all outbound LDAP connections. The default is 300000 milliseconds (5 minutes). A value of 0 means that the server does not impose a limit on I/O wait time. Parameter Description Entry DN cn=config Valid Range 0 to the maximum 32-bit integer value (2147483647) Default Value 300000 Syntax DirectoryString Example nsslapd-outbound-ldap-io-timeout: 300000 3.1.1.132. nsslapd-pagedsizelimit (Size Limit for Simple Paged Results Searches) This attribute sets the maximum number of entries to return from a search operation specifically which uses the simple paged results control . This overrides the nsslapd-sizelimit attribute for paged searches. If this value is set to zero, then the nsslapd-sizelimit attribute is used for paged searches as well as non-paged searches. Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) Default Value Syntax Integer Example nsslapd-pagedsizelimit: 10000 3.1.1.133. nsslapd-plug-in This read-only attribute lists the DNs of the plug-in entries for the syntax and matching rule plug-ins loaded by the server. 3.1.1.134. nsslapd-plugin-binddn-tracking Sets the bind DN used for an operation as the modifier of an entry, even if the operation itself was initiated by a server plug-in. The specific plug-in which performed the operation is listed in a separate operational attribute, internalModifiersname . One change can trigger other, automatic changes in the directory tree. When a user is deleted, for example, that user is automatically removed from any groups it belonged to by the Referential Integrity Plug-in. The initial deletion of the user is performed by whatever user account is bound to the server, but the updates to the groups (by default) are shown as being performed by the plug-in, with no information about which user initiated that update. The nsslapd-plugin-binddn-tracking attribute allows the server to track which user originated an update operation, as well as the internal plug-in which actually performed it. For example: This attribute is disabled by default. Parameter Description Entry DN cn=config Valid Range on | off Default Value off Syntax DirectoryString Example nsslapd-plugin-binddn-tracking: on 3.1.1.135. nsslapd-plugin-logging By default, even if access logging is set to record internal operations, plug-in internal operations are not logged in the access log file. Instead of enabling the logging in each plug-in's configuration, you can control it globally with this parameter. When enabled, plug-ins use this global setting and log access and audit events if enabled. If nsslapd-plugin-logging is enabled and nsslapd-accesslog-level is set to record internal operations, unindexed searches and other internal operations are logged into the access log file. In case nsslapd-plugin-logging is not set, unindexed searches from plug-ins are still logged in the Directory Server error log. Parameter Description Entry DN cn=config Valid Range on | off Default Value off Syntax DirectoryString Example nsslapd-plugin-logging: off 3.1.1.136. nsslapd-port (Port Number) This attribute gives the TCP/IP port number used for standard LDAP communications. To run TLS over this port, use the Start TLS extended operation. This selected port must be unique on the host system; make sure no other application is attempting to use the same port number. Specifying a port number of less than 1024 means the Directory Server has to be started as root . The server sets its uid to the nsslapd-localuser value after startup. When changing the port number for a configuration directory, the corresponding server instance entry in the configuration directory must be updated. The server has to be restarted for the port number change to be taken into account. Parameter Description Entry DN cn=config Valid Range 0 to 65535 Default Value 389 Syntax Integer Example nsslapd-port: 389 Note Set the port number to zero ( 0 ) to disable the LDAP port if the LDAPS port is enabled. 3.1.1.137. nsslapd-privatenamespaces This read-only attribute contains the list of the private naming contexts cn=config , cn=schema , and cn=monitor . Parameter Description Entry DN cn=config Valid Values cn=config, cn=schema, and cn=monitor Default Value Syntax DirectoryString Example nsslapd-privatenamespaces: cn=config 3.1.1.138. nsslapd-pwpolicy-inherit-global (Inherit Global Password Syntax) When the fine-grained password syntax is not set, new or updated passwords are not checked even though the global password syntax is configured. To inherit the global fine-grained password syntax, set this attribute to on . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-pwpolicy-inherit-global: off 3.1.1.139. nsslapd-pwpolicy-local (Enable Subtree- and User-Level Password Policy) Turns fine-grained (subtree- and user-level) password policy on and off. If this attribute has a value of off , all entries (except for cn=Directory Manager ) in the directory are subjected to the global password policy; the server ignores any defined subtree/user level password policy. If this attribute has a value of on , the server checks for password policies at the subtree- and user-level and enforce those policies. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-pwpolicy-local: off 3.1.1.140. nsslapd-readonly (Read Only) This attribute sets whether the whole server is in read-only mode, meaning that neither data in the databases nor configuration information can be modified. Any attempt to modify a database in read-only mode returns an error indicating that the server is unwilling to perform the operation. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-readonly: off 3.1.1.141. nsslapd-referral (Referral) This multi-valued attribute specifies the LDAP URLs to be returned by the suffix when the server receives a request for an entry not belonging to the local tree; that is, an entry whose suffix does not match the value specified on any of the suffix attributes. For example, assume the server contains only entries: but the request is for this entry: In this case, the referral would be passed back to the client in an attempt to allow the LDAP client to locate a server that contains the requested entry. Although only one referral is allowed per Directory Server instance, this referral can have multiple values. Note To use TLS communications, the referral attribute should be in the form ldaps:// server-location . Start TLS does not support referrals. For more information on managing referrals, see the "Configuring Directory Databases" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values Any valid LDAP URL Default Value Syntax DirectoryString Example nsslapd-referral: ldap://ldap.example.com/dc=example,dc=com 3.1.1.142. nsslapd-referralmode (Referral Mode) When set, this attribute sends back the referral for any request on any suffix. Parameter Description Entry DN cn=config Valid Values Any valid LDAP URL Default Value Syntax DirectoryString Example nsslapd-referralmode: ldap://ldap.example.com 3.1.1.143. nsslapd-require-secure-binds This parameter requires that a user authenticate to the directory over a protected connection such as TLS, StartTLS, or SASL, rather than a regular connection. Note This only applies to authenticated binds. Anonymous binds and unauthenticated binds can still be completed over a standard channel, even if nsslapd-require-secure-binds is turned on. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-secure-binds: on 3.1.1.144. nsslapd-requiresrestart This parameter lists what other core configuration attributes require that the server be restarted after a modification. This means that if any attribute listed in nsslapd-requiresrestart is changed, the new setting does not take effect until after the server is restarted. The list of attributes can be returned in an ldapsearch : This attribute is multi-valued. Parameter Description Entry DN cn=config Valid Values Any core server configuration attribute Default Value Syntax DirectoryString Example nsslapd-requiresrestart: nsslapd-cachesize 3.1.1.145. nsslapd-reservedescriptors (Reserved File Descriptors) This attribute specifies the number of file descriptors that Directory Server reserves for managing non-client connections, such as index management and managing replication. The number of file descriptors that the server reserves for this purpose subtracts from the total number of file descriptors available for servicing LDAP client connections (See Section 3.1.1.119, "nsslapd-maxdescriptors (Maximum File Descriptors)" ). Most installations of Directory Server should never need to change this attribute. However, consider increasing the value on this attribute if all of the following are true: The server is replicating to a large number of consumer servers (more than 10), or the server is maintaining a large number of index files (more than 30). The server is servicing a large number of LDAP connections. There are error messages reporting that the server is unable to open file descriptors (the actual error message differs depending on the operation that the server is attempting to perform), but these error messages are not related to managing client LDAP connections. Increasing the value on this attribute may result in more LDAP clients being unable to access the directory. Therefore, the value on this attribute is increased, also increase the value on the nsslapd-maxdescriptors attribute. It may not be possible to increase the nsslapd-maxdescriptors value if the server is already using the maximum number of file descriptors that the operating system allows a process to use; see the operating system documentation for details. If this is the case, then reduce the load on the server by causing LDAP clients to search alternative directory replicas. See Section 3.1.1.62, "nsslapd-conntablesize" for information about file descriptor usage for incoming connections. To assist in computing the number of file descriptors set for this attribute, use the following formula: NldbmBackends is the number of ldbm databases. NglobalIndex is the total number of configured indexes for all databases including system indexes. (By default 8 system indexes and 17 additional indexes per database). ReplicationDescriptor is eight (8) plus the number of replicas in the server that can act as a supplier or hub ( NSupplierReplica ). ChainingBackendDescriptors is NchainingBackend times the nsOperationConnectionsLimit (a chaining or database link configuration attribute; 10 by default). PTADescriptors is 3 if PTA is configured and 0 if PTA is not configured. SSLDescriptors is 5 (4 files + 1 listensocket) if TLS is configured and 0 if TLS is not configured. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Range 1 to 65535 Default Value 64 Syntax Integer Example nsslapd-reservedescriptors: 64 3.1.1.146. nsslapd-return-exact-case (Return Exact Case) Returns the exact case of attribute type names as requested by the client. Although LDAPv3-compliant clients must ignore the case of attribute names, some client applications require attribute names to match exactly the case of the attribute as it is listed in the schema when the attribute is returned by the Directory Server as the result of a search or modify operation. However, most client applications ignore the case of attributes; therefore, by default, this attribute is disabled. Do not modify it unless there are legacy clients that can check the case of attribute names in results returned from the server. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-return-exact-case: off 3.1.1.147. nsslapd-rewrite-rfc1274 This attribute is deprecated and will be removed in a later version. This attribute is used only for LDAPv2 clients that require attribute types to be returned with their RFC 1274 names. Set the value to on for those clients. The default is off . 3.1.1.148. nsslapd-rootdn (Manager DN) This attribute sets the distinguished name (DN) of an entry that is not subject to access control restrictions, administrative limit restrictions for operations on the directory, or resource limits in general. There does not have to be an entry corresponding to this DN, and by default there is not an entry for this DN, thus values like cn=Directory Manager are acceptable. For information on changing the root DN, see the "Creating Directory Entries" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values Any valid distinguished name Default Value Syntax DN Example nsslapd-rootdn: cn=Directory Manager 3.1.1.149. nsslapd-rootpw (Root Password) This attribute sets the password associated with the Manager DN. When the root password is provided, it is encrypted according to the encryption method selected for the nsslapd-rootpwstoragescheme attribute. When viewed from the server console, this attribute shows the value * . When viewed from the dse.ldif file, this attribute shows the encryption method followed by the encrypted string of the password. The example shows the password as displayed in the dse.ldif file, not the actual password. Warning When the root DN is configred at server setup, a root password is required. However, it is possible for the root password to be deleted from dse.ldif by directly editing the file. In this situation, the root DN can only obtain the same access to the directory is allowed for anonymous access. Always make sure that a root password is defined in dse.ldif when a root DN is configured for the database. The pwdhash command-line utility can create a new root password. For more information, see Section 9.6, "pwdhash" . Important When resetting the Directory Manager's password from the command line, do not use curly braces ( {} ) in the password. The root password is stored in the format {password-storage-scheme}hashed_password . Any characters in curly braces are interpreted by the server as the password storage scheme for the root password. If that text is not a valid storage scheme or if the password that follows is not properly hashed, then the Directory Manager cannot bind to the server. Parameter Description Entry DN cn=config Valid Values Any valid password, encrypted by any one of the encryption methods which are described in Section 4.1.43, "Password Storage Schemes" . Default Value Syntax DirectoryString { encryption_method } encrypted_Password Example nsslapd-rootpw: {SSHA}9Eko69APCJfF 3.1.1.150. nsslapd-rootpwstoragescheme (Root Password Storage Scheme) This attribute sets the method used to encrypt the Directory Server's manager password stored in the nsslapd-rootpw attribute. For further details, such as recommended strong password storage schemes, see Section 4.1.43, "Password Storage Schemes" . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Values See Section 4.1.43, "Password Storage Schemes" . Default Value PBKDF2_SHA256 Syntax DirectoryString Example nsslapd-rootpwstoragescheme: PBKDF2_SHA256 3.1.1.151. nsslapd-rundir This parameter sets the absolute path to the directory in which Directory Server stores run-time information, such as the PID file. The directory must be owned by the Directory Server user and group. Only this user and group must have read and write access in this directory. The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=config Valid Values Any directory writable by the Directory Server user Default Value /var/run/dirsrv/ Syntax DirectoryString Example nsslapd-rundir: /var/run/dirsrv/ 3.1.1.152. nsslapd-sasl-mapping-fallback By default, only first matching SASL mapping is checked. If this mapping fails, the bind operation will fail even if there are other matching mappings that might have worked. SASL mapping fallback will keep checking all of the matching mappings. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-sasl-mapping-fallback: off 3.1.1.153. nsslapd-sasl-max-buffer-size This attribute sets the maximum SASL buffer size. Parameter Description Entry DN cn=config Valid Values 0 to the maximum 32 bit integer value (2147483647) Default Value 67108864 (64 kilobytes) Syntax Integer Example nsslapd-sasl-max-buffer-size: 67108864 3.1.1.154. nsslapd-saslpath Sets the absolute path to the directory containing the Cyrus-SASL SASL2 plug-ins. Setting this attribute allows the server to use custom or non-standard SASL plug-in libraries. This is usually set correctly during installation, and Red Hat strongly recommends not changing this attribute. If the attribute is not present or the value is empty, this means the Directory Server is using the system provided SASL plug-in libraries which are the correct version. If this parameter is set, the server uses the specified path for loading SASL plug-ins. If this parameter is not set, the server uses the SASL_PATH environment variable. If neither nsslapd-saslpath or SASL_PATH are set, the server attempts to load SASL plug-ins from the default location, /usr/lib/sasl2 . Changes made to this attribute will not take effect until the server is restarted. Parameter Description Entry DN cn=config Valid Values Path to plug-ins directory. Default Value Platform dependent Syntax DirectoryString Example nsslapd-saslpath: /usr/lib/sasl2 3.1.1.155. nsslapd-schema-ignore-trailing-spaces (Ignore Trailing Spaces in Object Class Names) Ignores trailing spaces in object class names. By default, the attribute is turned off. If the directory contains entries with object class values that end in one or more spaces, turn this attribute on. It is preferable to remove the trailing spaces because the LDAP standards do not allow them. For performance reasons, server restart is required for changes to take effect. An error is returned by default when object classes that include trailing spaces are added to an entry. Additionally, during operations such as add, modify, and import (when object classes are expanded and missing superiors are added) trailing spaces are ignored, if appropriate. This means that even when nsslapd-schema-ignore-trailing-spaces is on , a value such as top is not added if top is already there. An error message is logged and returned to the client if an object class is not found and it contains trailing spaces. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-schema-ignore-trailing-spaces: on 3.1.1.156. nsslapd-schemacheck (Schema Checking) This attribute sets whether the database schema is enforced when entries are added or modified. When this attribute has a value of on , Directory Server will not check the schema of existing entries until they are modified. The database schema defines the type of information allowed in the database. The default schema can be extended using the object classes and attribute types. For information on how to extend the schema using the Directory Server Console, see the "Extending the Directory Schema" chapter in the Red Hat Directory Server Administration Guide . Warning Red Hat strongly discourages turning off schema checking. This can lead to severe interoperability problems. This is typically used for very old or non-standard LDAP data that must be imported into the Directory Server. If there are not a lot of entries that have this problem, consider using the extensibleObject object class in those entries to disable schema checking on a per entry basis. Note Schema checking works by default when database modifications are made using an LDAP client, such as ldapmodify or when importing a database from LDIF using ldif2db . If schema checking is turned off, every entry has to be verified manually to see that they conform to the schema. If schema checking is turned on, the server sends an error message listing the entries which do not match the schema. Ensure that the attributes and object classes created in the LDIF statements are both spelled correctly and identified in dse.ldif . Either create an LDIF file in the schema directory or add the elements to 99user.ldif . Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-schemacheck: on 3.1.1.157. nsslapd-schemadir This is the absolute path to the directory containing the Directory Server instance-specific schema files. When the server starts up, it reads the schema files from this directory, and when the schema is modified through LDAP tools, the schema files in this directory are updated. This directory must be owned by the server user ID, and that user must have read and write permissions to the directory. Changes made to this attribute will not take effect until the server is restarted. Parameter Description Entry DN cn=config Valid Values Any valid path Default Value /etc/dirsrv/ instance_name /schema Syntax DirectoryString Example nsslapd-schemadir: /etc/dirsrv/ instance_name /schem 3.1.1.158. nsslapd-schemamod Online schema modifications require a lock protection that are impacting the performance. If schema modifications are disabled, setting this parameter to off can increase the performance. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-schemamod: on 3.1.1.159. nsslapd-schemareplace Determines whether modify operations that replace attribute values are allowed on the cn=schema entry. Parameter Description Entry DN cn=config Valid Values on | off | replication-only Default Value replication-only Syntax DirectoryString Example nsslapd-schemareplace: replication-only 3.1.1.160. nsslapd-search-return-original-type-switch If the attribute list passed to a search contains a space followed by other characters, the same string is returned to the client. For example: This behavior is disabled by default, but can be enabled using this configuration parameter. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-search-return-type-switch: off 3.1.1.161. nsslapd-securelistenhost This attribute allows multiple Directory Server instances to run on a multihomed machine (or makes it possible to limit listening to one interface of a multihomed machine). There can be multiple IP addresses associated with a single host name, and these IP addresses can be a mix of both IPv4 and IPv6. This parameter can be used to restrict the Directory Server instance to a single IP interface; this parameter also specifically sets what interface to use for TLS traffic rather than regular LDAP connections. If a host name is given as the nsslapd-securelistenhost value, then the Directory Server responds to requests for every interface associated with the host name. If a single IP interface (either IPv4 or IPv6) is given as the nsslapd-securelistenhost value, Directory Server only responds to requests sent to that specific interface. Either an IPv4 or IPv6 address can be used. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Values Any secure host name, IPv4 or IPv6 address Default Value Syntax DirectoryString Example nsslapd-securelistenhost: ldaps.example.com 3.1.1.162. nsslapd-securePort (Encrypted Port Number) This attribute sets the TCP/IP port number used for TLS communications. This selected port must be unique on the host system; make sure no other application is attempting to use the same port number. Specifying a port number of less than 1024 requires that Directory Server be started as root . The server sets its uid to the nsslapd-localuser value after startup. The server only listens to this port if it has been configured with a private key and a certificate, and nsslapd-security is set to on ; otherwise, it does not listen on this port. The server has to be restarted for the port number change to be taken into account. Parameter Description Entry DN cn=config Valid Range 1 to 65535 Default Value 636 Syntax Integer Example nsslapd-securePort: 636 3.1.1.163. nsslapd-security (Security) This attribute sets whether the Directory Server is to accept TLS communications on its encrypted port. This attribute should be set to on for secure connections. To run with security on, the server must be configured with a private key and server certificate in addition to the other TLS configuration. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-security: off 3.1.1.164. nsslapd-sizelimit (Size Limit) This attribute sets the maximum number of entries to return from a search operation. If this limit is reached, ns-slapd returns any entries it has located that match the search request, as well as an exceeded size limit error. When no limit is set, ns-slapd returns every matching entry to the client regardless of the number found. To set a no limit value whereby the Directory Server waits indefinitely for the search to complete, specify a value of -1 for this attribute in the dse.ldif file. This limit applies to everyone, regardless of their organization. Note A value of -1 on this attribute in dse.ldif file is the same as leaving the attribute blank in the server console, in that it causes no limit to be used. This cannot have a null value in dse.ldif file, as it is not a valid integer. It is possible to set it to 0 , which returns size limit exceeded for every search. The corresponding user-level attribute is nsSizeLimit . Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) Default Value 2000 Syntax Integer Example nsslapd-sizelimit: 2000 3.1.1.165. nsslapd-snmp-index This parameter controls the SNMP index number of the Directory Server instance. If you have multiple Directory Server instances on the same host listening all on port 389 but on different network interfaces, this parameter allows you to set different SNMP index numbers for each instance. Parameter Description Entry DN cn=config Valid Values 0 to the maximum 32 bit integer value (2147483647) Default Value 0 Syntax Integer Example nsslapd-snmp-index: 0 3.1.1.166. nsslapd-SSLclientAuth Note The nsslapd-SSLclientAuth parameter will be deprecated in a future release and is currently maintained for backward compatibility. Use the new parameter nsSSLClientAuth , stored under cn=encryption,cn=config , instead. See Section 3.1.4.5, "nsSSLClientAuth" . 3.1.1.167. nsslapd-ssl-check-hostname (Verify Hostname for Outbound Connections) This attribute sets whether an TLS-enabled Directory Server should verify authenticity of a request by matching the host name against the value assigned to the common name ( cn ) attribute of the subject name ( subjectDN field) in the certificate being presented. By default, the attribute is set to on . If it is on and if the host name does not match the cn attribute of the certificate, appropriate error and audit messages are logged. For example, in a replicated environment, messages similar to the following are logged in the supplier server's log files if it finds that the peer server's host name does not match the name specified in its certificate: Red Hat recommends turning this attribute on to protect Directory Server's outbound TLS connections against a man in the middle (MITM) attack. Note DNS and reverse DNS must be set up correctly in order for this to work; otherwise, the server cannot resolve the peer IP address to the host name in the subject DN in the certificate. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-ssl-check-hostname: on 3.1.1.168. nsslapd-syntaxcheck This attribute validates all modifications to entry attributes to make sure that the new or changed values conform to the required syntax for that attribute type. Any changes which do not conform to the proper syntax are rejected, when this attribute is enabled. All attribute values are validated against the syntax definitions in RFC 4514 . By default, this is turned on. Syntax validation is only run against new or modified attributes; it does not validate the syntax of existing attribute values. Syntax validation is triggered for LDAP operations such as adds and modifies; it does not happen after operations like replication, since the validity of the attribute syntax should be checked on the originating supplier. This validates all supported attribute types for Directory Server, with the exception of binary syntaxes (which cannot be verified) and non-standard syntaxes, which do not have a defined required format. The unvalidated syntaxes are as follows: Fax (binary) OctetString (binary) JPEG (binary) Binary (non-standard) Space Insensitive String (non-standard) URI (non-standard) The nsslapd-syntaxcheck attribute sets whether to validate and reject attribute modifications. This can be used with the Section 3.1.1.169, "nsslapd-syntaxlogging" attribute to write warning messages about invalid attribute values to the error logs. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nnsslapd-syntaxcheck: on 3.1.1.169. nsslapd-syntaxlogging This attribute sets whether to log syntax validation failures to the errors log. By default, this is turned off. If the Section 3.1.1.168, "nsslapd-syntaxcheck" attribute is enabled (the default) and the nsslapd-syntaxlogging attribute is also enabled, then any invalid attribute change is rejected and written to the errors log. If only nsslapd-syntaxlogging is enabled and nsslapd-syntaxcheck is disabled, then invalid changes are allowed to proceed, but a warning message is written to the error log. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nnsslapd-syntaxlogging: off 3.1.1.170. nsslapd-threadnumber (Thread Number) This performance tuning-related value sets the number of threads, Directory Server creates at startup. If the value is set to -1 (default), Directory Server enables the optimized auto-tuning based on the available hardware. Note that if auto-tuning is enabled, the nsslapd-threadnumber shows the auto-generated number of threads while Directory Server is running. Note Red Hat recommends to use the auto-tuning setting for optimized performance. For further details, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . Parameter Description Entry DN cn=config Valid Range -1 to the maximum number of threads supported by the system's thread and processor. limits Default Value -1 Syntax Integer Example nsslapd-threadnumber: -1 3.1.1.171. nsslapd-timelimit (Time Limit) This attribute sets the maximum number of seconds allocated for a search request. If this limit is reached, Directory Server returns any entries it has located that match the search request, as well as an exceeded time limit error. When no limit is set, ns-slapd returns every matching entry to the client regardless of the time it takes. To set a no limit value whereby Directory Server waits indefinitely for the search to complete, specify a value of -1 for this attribute in the dse.ldif file. A value of zero ( 0 ) causes no time to be allowed for searches. The smallest time limit is 1 second. Note A value of -1 on this attribute in the dse.ldif is the same as leaving the attribute blank in the server console in that it causes no limit to be used. However, a negative integer cannot be set in this field in the server console, and a null value cannot be used in the dse.ldif entry, as it is not a valid integer. The corresponding user-level attribute is nsTimeLimit . Parameter Description Entry DN cn=config Valid Range -1 to the maximum 32 bit integer value (2147483647) in seconds Default Value 3600 Syntax Integer Example nsslapd-timelimit: 3600 3.1.1.172. nsslapd-tmpdir This is the absolute path of the directory the server uses for temporary files. The directory must be owned by the server user ID and the user must have read and write access. No other user ID should have read or write acces to the directory. The default value is /tmp . Changes made to this attribute will not take effect until the server is restarted. 3.1.1.173. nsslapd-unhashed-pw-switch When you update the userPassword attribute, Directory Server encrypts the password and stores it in userPassword . However, in certain situations, for example, when synchronizing passwords with Active Directory (AD), Directory Server must pass the unencrypted password to a plug-in. In this case, the server stores the unencrypted password in the temporary unhashed#user#password attribute in the so-called entry extension and, depending on the scenario, also in the changelog. Note that Directory Server does not store the temporary unhashed#user#password attribute on the server's hard disk. The nsslapd-unhashed-pw-switch parameter controls whether and how Directory Server stores the unencrypted password. For example, you must set nsslapd-unhashed-pw-switch to on to synchronize passwords from Directory Server to Active Directory. You can set the parameter to one of the following values: off : Directory Server neither stores the unencrypted password in the entry extension nor in the changelog. Set this value if you do not use password synchronization with AD or any plug-ins that requires access to the unencrypted password. on : Directory Server stores the unencrypted password in the entry extension and in the changelog. Set this value if you configure password synchronization with AD. nolog : Directory Server stores the unencrypted password only in the entry extension but not in the changelog. Set this value if local Directory Server plug-ins require access to the unencrypted password, but no password synchronization with AD is configured. Parameter Description Entry DN cn=config Valid Values off | on | nolog Default Value off Syntax DirectoryString Example nsslapd-unhashed-pw-switch: off 3.1.1.174. nsslapd-validate-cert If the Directory Server is configured to run in TLS and its certificate expires, then the Directory Server cannot be started. The nsslapd-validate-cert parameter sets how the Directory Server should respond when it attempts to start with an expired certificate: warn allows the Directory Server to start successfully with an expired certificate, but it sends a warning message that the certificate has expired. This is the default setting. on validates the certificate and will prevent the server from restarting if the certificate is expired. This sets a hard failure for expired certificates. off disables all certificate expiration validation, so the server can start with an expired certificate without logging a warning. Parameter Description Entry DN cn=config Valid Values warn | on | off Default Value warn Syntax DirectoryString Example nsslapd-validate-cert: warn 3.1.1.175. nsslapd-verify-filter-schema The nsslapd-verify-filter-schema parameter defines how Directory Server verifies search filters with attributes that are not specified in the schema. You can set nsslapd-verify-filter-schema to one of the following options: reject-invalid : Directory Server rejects the filter with an error if it contains any unknown element. process-safe : Directory Server replaces unknown components with an empty set, and logs a warning with the notes=F flag in the /var/log/dirsrv/slapd- instance_name /access log file. Before you switch nsslapd-verify-filter-schema from warn-invalid or off to process-safe , monitor the access log and fix queries from applications that cause log entries with notes=F flag. Otherwise, the operation result changes and Directory Server might not return all the matching entries. warn-invalid : Directory Server logs a warning with the notes=F flag in the /var/log/dirsrv/slapd- instance_name /access log file, and continues scanning the full database. off : Directory Server does not verify filters. Note that, for example, if you set nsslapd-verify-filter-schema to warn-invalid or off , a filter, such as (&(non_exististent_attribute=example)(uid= user_name )) evaluates the uid= user_name entry and returns it only if it contains contains non_exististent_attribute=example . If you set nsslapd-verify-filter-schema to process-safe , Directory Server does not evaluate that entry and does not return it. Note Setting nsslapd-verify-filter-schema to reject-invalid or process-safe can prevent high load due to unindexed searches for attributes that are not specified in the schema. Parameter Description Entry DN cn=config Valid Values reject-invalid, process-safe, warn-invalid, off Default Value warn-invalid Syntax DirectoryString Example nsslapd-verify-filter-schema: warn-invalid 3.1.1.176. nsslapd-versionstring This attribute sets the server version number. The build data is automatically appended when the version string is displayed. Parameter Description Entry DN cn=config Valid Values Any valid server version number. Default Value Syntax DirectoryString Example nsslapd-versionstring: Red Hat-Directory/11.3 3.1.1.177. nsslapd-workingdir This is the absolute path of the directory that the server uses as its current working directory after startup. This is the value that the server would return as the value of the getcwd() function, and the value that the system process table shows as its current working directory. This is the directory a core file is generated in. The server user ID must have read and write access to the directory, and no other user ID should have read or write access to it. The default value for this attribute is the same directory containing the error log, which is usually /var/log/dirsrv/slapd- instance . Changes made to this attribute will not take effect until the server is restarted. 3.1.1.178. passwordAllowChangeTime This attribute specifies the length of time that must pass before the user is allowed to change his password. For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values Any integer Default Value Syntax DirectoryString Example passwordAllowChangeTime: 5h 3.1.1.179. passwordChange (Password Change) Indicates whether users may change their passwords. This can be abbreviated to pwdAllowUserChange . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example passwordChange: on 3.1.1.180. passwordCheckSyntax (Check Password Syntax) This attribute sets whether the password syntax is checked before the password is saved. The password syntax checking mechanism checks that the password meets or exceeds the password minimum length requirement and that the string does not contain any trivial words, such as the user's name or user ID or any attribute value stored in the uid , cn , sn , givenName , ou , or mail attributes of the user's directory entry. Password syntax includes several different categories for checking: The length of string or tokens to use to compare when checking for trivial words in the password (for example, if the token length is three, then no string of three sequential characters in the user's UID, name, email address, or other parameters can be used in the password) Minimum number of number characters (0-9) Minimum number of uppercase ASCII alphabetic characters Minimum number of lowercase ASCII alphabetic characters Minimum number of special ASCII characters, such as !@#USD Minimum number of 8-bit characters Minimum number of character categories required per password; a category can be upper- or lower-case letters, special characters, digits, or 8-bit characters This can be abbreviated to pwdCheckSyntax . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordCheckSyntax: off 3.1.1.181. passwordDictCheck If set to on , the passwordDictCheck parameter checks the password against the CrackLib dictionary. Directory Server rejects the password if the new password contains a dictionary word. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordDictCheck: off 3.1.1.182. passwordExp (Password Expiration) Indicates whether user passwords expire after a given number of seconds. By default, user passwords do not expire. Once password expiration is enabled, set the number of seconds after which the password expires using the passwordMaxAge attribute. For more information on password policies, see the "Managing User Accounts" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordExp: on 3.1.1.183. passwordExpirationTime This attribute specifies the length of time that passes before the user's password expires. Parameter Description Entry DN cn=config Valid Values Any date, in integers Default Value none Syntax GeneralizedTime Example passwordExpirationTime: 202009011953 3.1.1.184. passwordExpWarned This attribute indicates that a password expiration warning has been sent to the user. Parameter Description Entry DN cn=config Valid Values true | false Default Value none Syntax DirectoryString Example passwordExpWarned: true 3.1.1.185. passwordGraceLimit (Password Expiration) This attribute is only applicable if password expiration is enabled. After the user's password has expired, the server allows the user to connect for the purpose of changing the password. This is called a grace login . The server allows only a certain number of attempts before completely locking out the user. This attribute is the number of grace logins allowed. A value of 0 means the server does not allow grace logins. Parameter Description Entry DN cn=config Valid Values 0 (off) to any reasonable integer Default Value 0 Syntax Integer Example passwordGraceLimit: 3 3.1.1.186. passwordHistory (Password History) Enables password history. Password history refers to whether users are allowed to reuse passwords. By default, password history is disabled, and users can reuse passwords. If this attribute is set to on , the directory stores a given number of old passwords and prevents users from reusing any of the stored passwords. Set the number of old passwords the Directory Server stores using the passwordInHistory attribute. For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordHistory: on 3.1.1.187. passwordInHistory (Number of Passwords to Remember) Indicates the number of passwords the Directory Server stores in history. Passwords that are stored in history cannot be reused by users. By default, the password history feature is disabled, meaning that the Directory Server does not store any old passwords, and so users can reuse passwords. Enable password history using the passwordHistory attribute. To prevent users from rapidly cycling through the number of passwords that are tracked, use the passwordMinAge attribute. This can be abbreviated to pwdInHistory . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 1 to 24 passwords Default Value 6 Syntax Integer Example passwordInHistory: 7 3.1.1.188. passwordIsGlobalPolicy (Password Policy and Replication) This attribute controls whether password policy attributes are replicated. Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordIsGlobalPolicy: off 3.1.1.189. passwordLegacyPolicy Enables legacy password behavior. Older LDAP clients expected to receive an error to lock a user account once the maximum failure limit was exceeded . For example, if the limit were three failures, then the account was locked at the fourth failed attempt. Newer clients, however, expect to receive the error message when the failure limit is reached. For example, if the limit is three failures, then the account should be locked at the third failed attempt. Because locking the account when the failure limit is exceeded is the older behavior, it is considered legacy behavior. It is enabled by default, but can be disabled to allow the new LDAP clients to receive the error at the expected time. Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example passwordLegacyPolicy: on 3.1.1.190. passwordLockout (Account Lockout) Indicates whether users are locked out of the directory after a given number of failed bind attempts. By default, users are not locked out of the directory after a series of failed bind attempts. If account lockout is enabled, set the number of failed bind attempts after which the user is locked out using the passwordMaxFailure attribute. This can be abbreviated to pwdLockOut . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordLockout: off 3.1.1.191. passwordLockoutDuration (Lockout Duration) Indicates the amount of time in seconds during which users are locked out of the directory after an account lockout. The account lockout feature protects against hackers who try to break into the directory by repeatedly trying to guess a user's password. Enable and disable the account lockout feature using the passwordLockout attribute. This can be abbreviated to pwdLockoutDuration . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) in seconds Default Value 3600 Syntax Integer Example passwordLockoutDuration: 3600 3.1.1.192. passwordMaxAge (Password Maximum Age) Indicates the number of seconds after which user passwords expire. To use this attribute, password expiration has to be enabled using the passwordExp attribute. This can be abbreviated to pwdMaxAge . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) in seconds Default Value 8640000 (100 days) Syntax Integer Example passwordMaxAge: 100 3.1.1.193. passwordBadWords The passwordBadWords parameter defines a comma-separated list of strings that users are not allowed to use in a password. Note that Directory Server does a case-insensitive comparison of the strings. Parameter Description Entry DN cn=config Valid Values Any string Default Value "" Syntax DirectoryString Example passwordBadWords: example 3.1.1.194. passwordMaxClassChars If you set the passwordMaxClassChars parameter to a value higher than 0 , Directory Server prevents setting a password that has more consecutive characters from the same category than the value set in the parameter. If enabled, Directory Server checks for consecutive characters of the following categories: digits alpha characters lower case upper case For example, if you set passwordMaxClassChars to 3 , passwords containing, for example, jdif or 1947 are not allowed. Parameter Description Entry DN cn=config Valid Range 0 (disabled) to maximum 32-bit integer (2147483647) Default Value 0 Syntax Integer Example passwordMaxClassChars: 0 3.1.1.195. passwordMaxFailure (Maximum Password Failures) Indicates the number of failed bind attempts after which a user is locked out of the directory. By default, account lockout is disabled. Enable account lockout by modifying the passwordLockout attribute. This can be abbreviated to pwdMaxFailure . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 1 to maximum integer bind failures Default Value 3 Syntax Integer Example passwordMaxFailure: 3 3.1.1.196. passwordMaxRepeats (Password Syntax) Maximum number of times the same character can appear sequentially in the password. Zero ( 0 ) is off. Integer values reject any password which used a character more than that number of times; for example, 1 rejects characters that are used more than once ( aa ) and 2 rejects characters used more than twice ( aaa ). Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMaxRepeats: 1 3.1.1.197. passwordMaxSeqSets If you set the passwordMaxSeqSets parameter to a value higher than 0 , Directory Server rejects passwords with duplicate monotonic sequences exceeding the length set in the parameter. For example, if you set passwordMaxSeqSets to 2 , setting the password to azXYZ_XYZ-g is not allowed, because XYZ appears twice in the password. Parameter Description Entry DN cn=config Valid Range 0 (disabled) to the maximum 32 bit integer value (2147483647) Default Value 0 Syntax Integer Example passwordMaxSeqSets: 0 3.1.1.198. passwordMaxSequence If you set the passwordMaxSequence parameter to a value higher than 0 , Directory Server rejects new passwords with a monotonic sequence longer than the value set in passwordMaxSequence . For example, if you set the parameter to 3 , Directory Server rejects passwords containing strings such as 1234 or dcba . Parameter Description Entry DN cn=config Valid Range 0 (disabled) to the maximum 32 bit integer value (2147483647) Default Value 0 Syntax Integer Example passwordMaxSequence: 0 3.1.1.199. passwordMin8Bit (Password Syntax) This sets the minimum number of 8-bit characters the password must contain. Note The 7-bit checking for userPassword must be disabled to use this. Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMin8Bit: 0 3.1.1.200. passwordMinAge (Password Minimum Age) Indicates the number of seconds that must pass before a user can change their password. Use this attribute in conjunction with the passwordInHistory (number of passwords to remember) attribute to prevent users from quickly cycling through passwords so that they can use their old password again. A value of zero ( 0 ) means that the user can change the password immediately. This can be abbreviated to pwdMaxFailure . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 0 to valid maximum integer Default Value 0 Syntax Integer Example passwordMinAge: 150 3.1.1.201. passwordMinAlphas (Password Syntax) This attribute sets the minimum number of alphabetic characters password must contain. Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMinAlphas: 4 3.1.1.202. passwordMinCategories (Password Syntax) This sets the minimum number of character categories that are represented in the password. The categories are: Lowercase alphabetic characters Uppercase alphabetic characters Numbers Special ASCII charactes, such as USD and punctuation marks 8-bit characters For example, if the value of this attribute were set to 2 , and the user tried to change the password to aaaaa , the server would reject the password because it contains only lower case characters, and therefore contains characters from only one category. A password of aAaAaA would pass because it contains characters from two categories, uppercase and lowercase. The default is 3 , which means that if password syntax checking is enabled, valid passwords have to have three categories of characters. Parameter Description Entry DN cn=config Valid Range 0 to 5 Default Value 0 Syntax Integer Example passwordMinCategories: 2 3.1.1.203. PasswordMinDigits (Password Syntax) This sets the minimum number of digits a password must contain. Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMinDigits: 3 3.1.1.204. passwordMinLength (Password Minimum Length) This attribute specifies the minimum number of characters that must be used in Directory Server user password attributes. In general, shorter passwords are easier to crack. Directory Server enforces a minimum password of eight characters. This is long enough to be difficult to crack but short enough that users can remember the password without writing it down. This can be abbreviated to pwdMinLength . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 2 to 512 characters Default Value 8 Syntax Integer Example passwordMinLength: 8 3.1.1.205. PasswordMinLowers (Password Syntax) This attribute sets the minimum number of lower case letters password must contain. Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMinLowers: 1 3.1.1.206. PasswordMinSpecials (Password Syntax) This attribute sets the minimum number of special , or not alphanumeric, characters a password must contain. Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMinSpecials: 1 3.1.1.207. PasswordMinTokenLength (Password Syntax) This attribute sets the smallest attribute value length that is used for trivial words checking. For example, if the PasswordMinTokenLength is set to 3 , then a givenName of DJ does not result in a policy that rejects DJ from being in the password, but the policy rejects a password comtaining the givenName of Bob . Directory Server checks the minimum token length against values in the following attributes: uid cn sn givenName mail ou If Directory Server should check additional attributes, you can set them in the passwordUserAttributes parameter. For details, see Section 3.1.1.212, "passwordUserAttributes" . Parameter Description Entry DN cn=config Valid Range 1 to 64 Default Value 3 Syntax Integer Example passwordMinTokenLength: 3 3.1.1.208. PasswordMinUppers (Password Syntax) This sets the minimum number of uppercase letters password must contain. Parameter Description Entry DN cn=config Valid Range 0 to 64 Default Value 0 Syntax Integer Example passwordMinUppers: 2 3.1.1.209. passwordMustChange (Password Must Change) Indicates whether users must change their passwords when they first bind to the Directory Server or when the password has been reset by the Manager DN. This can be abbreviated to pwdMustChange . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordMustChange: off 3.1.1.210. passwordPalindrome If you enable the passwordPalindrome parameter, Directory Server rejects a password if the new password contains a palindrome. A palindrome is a string which reads the same forward as backward, such as abc11cba . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordPalindrome: off 3.1.1.211. passwordResetFailureCount (Reset Password Failure Count After) Indicates the amount of time in seconds after which the password failure counter resets. Each time an invalid password is sent from the user's account, the password failure counter is incremented. If the passwordLockout attribute is set to on , users are locked out of the directory when the counter reaches the number of failures specified by the passwordMaxFailure attribute (within 600 seconds by default). After the amount of time specified by the passwordLockoutDuration attribute, the failure counter is reset to zero ( 0 ). This can be abbreviated to pwdFailureCountInterval . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) in seconds Default Value 600 Syntax Integer Example passwordResetFailureCount: 600 3.1.1.212. passwordUserAttributes By default, if you set a minimum token length in the passwordMinTokenLength parameter, Directory Server checks the tokens only against certain attributes. For details, see Section 3.1.1.207, "PasswordMinTokenLength (Password Syntax)" . The passwordUserAttributes parameter enables you to set a comma-separated list of additional attributes that Directory Server should check. Parameter Description Entry DN cn=config Valid Values Any string Default Value "" Syntax DirectoryString Example passwordUserAttributes: telephoneNumber, l 3.1.1.213. passwordSendExpiringTime When a client requests the password expiring control, Directory Server returns the "time to expire" value only if the password is within the warning period. To provide compatibility with existing clients that always expect this value to be returned - regardless if the password expiration time is within the warning period - the passwordSendExpiringTime parameter can be set to on . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordSendExpiringTime: off 3.1.1.214. passwordStorageScheme (Password Storage Scheme) This attribute sets the method used to encrypt user passwords stored in userPassword attributes. For further details, such as recommended strong password storage schemes, see Section 4.1.43, "Password Storage Schemes" . Note Red Hat recommends not setting this attribute. I the value is not set, Directory Server automatically uses the strongest supported password storage scheme available. If a future Directory Server update changes the default value to increase security, passwords will be automatically encrypted using the new storage scheme if a user set a passwords. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Values See Section 4.1.43, "Password Storage Schemes" . Default Value PBKDF2_SHA256 Syntax DirectoryString Example passwordStorageScheme: PBKDF2_SHA256 3.1.1.215. passwordTPRDelayExpireAt The passwordTPRDelayExpireAt attribute is part of the password policy. After the administrator sets a temporary password to a user account, passwordTPRDelayExpireAt defines the time in seconds before the temporary password expires. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Values -1 (disabled) to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example passwordTPRDelayExpireAt: 3600 3.1.1.216. passwordTPRDelayValidFrom The passwordTPRDelayValidFrom attribute is part of the password policy. After the administrator sets a temporary password to a user account, passwordTPRDelayValidFrom defines the time in seconds before a temporary password can be used. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Values -1 (disabled) to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example passwordTPRDelayValidFrom: 60 3.1.1.217. passwordTPRMaxUse The passwordTPRMaxUse attribute is part of the password policy. The attribute sets the number of times a user can authenticate successfully or not before the temporary password expires. If the authentication is successful, Directory Server only allows the user to change the password before other operations are possible. If the user does not change the password, the operation is terminated. The counter of the number of authentication attempts is increased regardless whether the authentication was successful or not. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=config Valid Values -1 (disabled) to the maximum 32 bit integer value (2147483647) Default Value -1 Syntax Integer Example passwordTPRMaxUse: 5 3.1.1.218. passwordTrackUpdateTime Sets whether to record a separate timestamp specifically for the last time that the password for an entry was changed. If this is enabled, then it adds the pwdUpdateTime operational attribute to the user account entry (separate from other update times, like modifyTime ). Using this timestamp can make it easier to synchronize password changes between different LDAP stores, such as Active Directory. For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordTrackUpdateTime: off 3.1.1.219. passwordUnlock (Unlock Account) Indicates whether users are locked out of the directory for a specified amount of time or until the administrator resets the password after an account lockout. The account lockout feature protects against hackers who try to break into the directory by repeatedly trying to guess a user's password. If this passwordUnlock attribute is set to off and the operational attribute accountUnlockTime has a value of 0 , then the account is locked indefinitely. For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Values on | off Default Value on Syntax DirectoryString Example passwordUnlock: off 3.1.1.220. passwordWarning (Send Warning) Indicates the number of seconds before a user's password is due to expire that the user receives a password expiration warning control on their LDAP operation. Depending on the LDAP client, the user may also be prompted to change their password at the time the warning is sent. This can be abbreviated to pwdExpireWarning . For more information on password policies, see the "Managing User Authentication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=config Valid Range 1 to the maximum 32 bit integer value (2147483647) in seconds Default Value 86400 (1 day) Syntax Integer Example passwordWarning: 86400 3.1.1.221. passwordAdminSkipInfoUpdate You can add a new passwordAdminSkipInfoUpdate: on/off setting under the cn=config entry to provide a fine grained control over password updates performed by password administrators. When you set this setting to on , only the password is changed and the password state attributes in the user entry are not updated. Such attributes are, for example, passwordHistory , passwordExpirationTime , passwordRetryCount , pwdReset and, passwordExpWarned . Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example passwordAdminSkipInfoUpdate: on Note The passwords administrators can not only bypass password syntax checks, but also password expiration settings that are configured in the global and local password policies and that use expiration timestamp ( passwordExpirationTime ) and must change the password ( pwdMustChange) attributes by using the setting `passwordAdminSkipInfoUpdate: on/off . 3.1.1.222. retryCountResetTime The retryCountResetTime attribute contains the date and time in UTC-format after which the passwordRetryCount attribute will be reset to 0 . Parameter Description Entry DN cn=config Valid Range Any valid time stamp in UTC format Default Value none Syntax Generalized Time Example retryCountResetTime: 20190618094419Z 3.1.2. cn=changelog5,cn=config Multi-supplier replication changelog configuration entries are stored under the cn=changelog5 entry. The cn=changelog5,cn=config entry is an instance of the extensibleObject object class. The cn=changelog5 entry must contain the following object classes: top extensibleObject Note Two different types of changelogs are maintained by Directory Server. The first type, which is stored here and referred to as the changelog , is used by multi-supplier replication; the second changelog, which is actually a plug-in and referred to as the retro changelog , is for compatibility with some legacy applications. See Section 4.1.48, "Retro Changelog Plug-in" for further information about the Retro Changelog Plug-in. 3.1.2.1. cn This required attribute sets the relative distinguished name (RDN) of a changelog entry. Parameter Description Entry DN cn=changelog5,cn=config Valid Values Any string Default Value changelog5 Syntax DirectoryString Example cn=changelog5 3.1.2.2. nsslapd-changelogcompactdb-interval The Berkeley database does not reuse free pages unless the database is explicitly compacted. The compact operation returns the unused pages to the file system and the database file size shrinks. This parameter defines the interval in seconds when the changelog database is compacted. Note that compacting the database is resource-intensive, and thus should not be done to frequently. This setting does not require a server restart to take effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Values 0 (no compaction) to 2147483647 seconds Default Value 2592000 (30 days) Syntax Integer Example nsslapd-changelogcompactdb-interval: 2592000 3.1.2.3. nsslapd-changelogdir This required attribute specifies the name of the directory in which the changelog entry is created. Whenever a changelog configuration entry is created, it must contain a valid directory; otherwise, the operation is rejected. The GUI proposes by default that this entry be stored in /var/lib/dirsrv/slapd- instance /changelogdb/ . Warning If the cn=changelog5 entry is removed, the directory specified in the nsslapd-changelogdir parameter, including any subdirectories, are removed, with all of their contents. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Values Any valid path to the directory storing the changelog Default Value None Syntax DirectoryString Example nsslapd-changelogdir: /var/lib/dirsrv/slapd- instance /changelogdb/ 3.1.2.4. nsslapd-changelogmaxage (Max Changelog Age) When synchronizing with a consumer, Directory Server stores each update in the changelog with a time stamp. The nsslapd-changelogmaxage parameter sets the maximum age of a record stored in the changelog. Older records that were successfully transferred to all replicas are removed automatically. By default, Directory Server removes records that are older than seven days. However, if you disabled nsslapd-changelogmaxage and nsslapd-changelogmaxentries parameters, Directory Server keeps all records in the changelog, and it can lead to the excessive growth of the changelog file. Note Retro changelog has its own nsslapd-changelogmaxage attribute that is described in section Retro changelog nsslapd-changelogmaxage The trim operation is executed in intervals set in the nsslapd-changelogtrim-interval parameter. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Range 0 (meaning that entries are not removed according to their age) to maximum 32-bit integer (2147483647) Default Value 7d Syntax DirectoryString IntegerAgeID where AgeID is s (S) for seconds, m (M) for minutes, h (H) for hours, d (D) for days, and w (W) for weeks Example nsslapd-changelogmaxage: 4w 3.1.2.5. nsslapd-changelogmaxentries (Max Changelog Records) When synchronizing with a consumer, Directory Server stores each update in the changelog. The nsslapd-changelogmaxentries parameter sets the maximum number of records stored in the changelog. If the number of the oldest records, that were successfully transferred to all replicas, exceeds the nsslapd-changelogmaxentries value, Directory Server automatically removes them from the changelog. If you disabled nsslapd-changelogmaxentries and nsslapd-changelogmaxage parameters, Directory Server keeps all records in the changelog, and it can lead to the excessive growth of the changelog file. Note Directory Server does not automatically reduce the file size of the replication changelog if you set a lower value in the nsslapd-changelogmaxentries parameter. For further details, see the corresponding sections in the Red Hat Directory Administration Guide . Directory Server executes the trim operation in intervals set in the nsslapd-changelogtrim-interval parameter. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Range 0 (meaning that the only maximum limit is the disk size) to maximum 32-bit integer (2147483647) Default Value 0 Syntax Integer Example nsslapd-changelogmaxentries: 5000 3.1.2.6. nsslapd-changelogtrim-interval (Replication Changelog Trimming Interval) Directory Server repeatedly runs a trimming process on the changelog. To change the time between two runs, update the nsslapd-changelogtrim-interval parameter and set the interval in seconds. This setting does not require a server restart to take effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Range 0 to the maximum 32 bit integer value (2147483647) Default Value 300 (5 minutes) Syntax DirectoryString Example nsslapd-changelogtrim-interval: 300 3.1.2.7. nsslapd-encryptionalgorithm (Encryption Algorithm) This attribute specifies the encryption algorithm used to encrypt the changelog. To enable the changelog encryption, the server certificate must be installed on the directory server. For information on the changelog, see Section 3.1.2.3, "nsslapd-changelogdir" . The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Range AES or 3DES Default Value None Syntax DirectoryString Example nsslapd-encryptionalgorithm: AES 3.1.2.8. nsSymmetricKey This attribute stores the internally-generated symmetric key. For information on the changelog, see Section 3.1.2.3, "nsslapd-changelogdir" . The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=changelog5,cn=config Valid Range Base 64-encoded key Default Value None Syntax DirectoryString Example None 3.1.3. Changelog Attributes The changelog attributes contain the changes logged in the changelog. 3.1.3.1. changes This attribute contains the changes made to the entry for add and modify operations in LDIF format. OID 2.16.840.1.113730.3.1.8 Syntax Binary Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.2. changeLog This attribute contains the distinguished name of the entry which contains the set of entries comprising the server's changelog. OID 2.16.840.1.113730.3.1.35 Syntax DN Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.3. changeNumber This attribute is always present. It contains an integer which uniquely identifies each change made to a directory entry. This number is related to the order in which the change occurred. The higher the number, the later the change. OID 2.16.840.1.113730.3.1.5 Syntax Integer Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.4. changeTime This attribute defines a time, in a YYMMDDHHMMSS format, when the entry was added. OID 2.16.840.1.113730.3.1.77 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.1.3.5. changeType This attribute specifies the type of LDAP operation, add , delete , modify , or modrdn . For example: OID 2.16.840.1.113730.3.1.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.6. deleteOldRdn In the case of modrdn operations, this attribute specifies whether the old RDN was deleted. A value of zero ( 0 ) will delete the old RDN. Any other non-zero value will keep the old RDN. (Non-zero values can be negative or positive integers.) OID 2.16.840.1.113730.3.1.10 Syntax Boolean Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.7. filterInfo This is used by the changelog for processing replication. OID 2.16.840.1.113730.3.1.206 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.1.3.8. newRdn In the case of modrdn operations, this attribute specifies the new RDN of the entry. OID 2.16.840.1.113730.3.1.9 Syntax DN Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.9. newSuperior In the case of modrdn operations, this attribute specifies the new parent (superior) entry for the moved entry. OID 2.16.840.1.113730.3.1.11 Syntax DN Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.3.10. targetDn This attribute contains the DN of the entry that was affected by the LDAP operation. In the case of a modrdn operation, the targetDn attribute contains the DN of the entry before it was modified or moved. OID 2.16.840.1.113730.3.1.6 Syntax DN Multi- or Single-Valued Multi-valued Defined in Changelog Internet Draft 3.1.4. cn=encryption Encryption related attributes are stored under the cn=encryption,cn=config entry. The cn=encryption,cn=config entry is an instance of the nsslapdEncryptionConfig object class. 3.1.4.1. allowWeakCipher This attribute controls whether weak ciphers are allowed or rejected. The default depends on the value set in the nsSSL3Ciphers parameter. Ciphers are considered weak, if: They are exportable. Exportable ciphers are labeled EXPORT in the cipher name. For example, in TLS_RSA_EXPORT_WITH_RC4_40_MD5 . They are symmetrical and weaker than the 3DES algorithm. Symmetrical ciphers use the same cryptographic keys for both encryption and decryption. The key length is shorter than 128 bits. The server has to be restarted for changes to this attribute to take effect. Entry DN cn=encryption,cn=config Valid Values on | off Default Value off , if the value in the nsSSL3Ciphers parameter is set to +all or default . on , if the value in the nsSSL3Ciphers parameter contains a user-specific cipher list. Syntax DirectoryString Example allowWeakCipher: on 3.1.4.2. allowWeakDHParam The network security services (NSS) libraries linked with Directory Server requires minimum of 2048-bit Diffie-Hellman (DH) parameters. However, some clients connecting to Directory Server, such as Java 1.6 and 1.7 clients, only support 1024-bit DH parameters. The allowWeakDHParam parameter allows you to enable support for weak 1024-bit DH parameters in Directory Server. The server has to be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=encryption,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example allowWeakDHParam: off 3.1.4.3. nsSSL3Ciphers This attribute specifies the set of TLS encryption ciphers Directory Server uses during encrypted communications. The value set in this parameter influences the default value of the allowWeakCipher parameter. For details, see Section 3.1.4.1, "allowWeakCipher" . Parameter Description Entry DN cn=encryption,cn=config Valid Values Comma separated list of NSS supported ciphers. Additionally, the following parameters are possible: * default: Enables the default ciphers advertised by NSS except weak ciphers. For further information, see List supported cipher suites for SSL connections . * +all: All ciphers are enabled. This includes weak ciphers, if the allowWeakCipher parameter is enabled. * -all: All ciphers are disabled. Default Value default Syntax DirectoryString Use the plus ( + ) symbol to enable or minus ( - ) symbol to disable, followed by the ciphers. Blank spaces are not allowed in the list of ciphers. To enable all ciphers - except rsa_null_md5, which must be specifically called - specify +all . Example nsSSL3Ciphers: +TLS_RSA_AES_128_SHA,+TLS_RSA_AES_256_SHA,+TLS_RSA_WITH_AES_128_GCM_SHA256,-RSA_NULL_SHA For details how to list all supported ciphers, see the corresponding section in the Red Hat Directory Server Administration Guide . 3.1.4.4. nsSSLActivation This attribute shows whether an TLS cipher family is enabled for a given security module. Entry DN cn= encryptionType ,cn=encryption,cn=config Valid Values on | off Default Value Syntax DirectoryString Example nsSSLActivation: on 3.1.4.5. nsSSLClientAuth This attribute shows how the Directory Server enforces client authentication. It accepts the following values: off - the Directory Server will not accept client authentication allowed (default) - the Directory Server will accept client authentication, but not require it required - all clients must use client authentication. Important The Directory Server Console does not support client authentication. Therefore, if the nsSSLClientAuth attribute is set to required , the Console cannot be used to manage the instance. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config Valid Values off | allowed | required Default Value allowed Syntax DirectoryString Example nsSSLClientAuth: allowed 3.1.4.6. nsSSLEnabledCiphers Directory Server generates the multi-valued nsSSLEnabledCiphers attribute automatically. The attribute is read-only and displays the ciphers Directory Server currently uses. The list might not be the same as you set in the nsSSL3Ciphers attribute. For example, if you set weak ciphers in the nsSSL3Ciphers attribute, but allowWeakCipher is disabled, the nsSSLEnabledCiphers attribute neither lists the weak ciphers nor does Directory Server use them. Parameter Description Entry DN cn=config Valid Values The values of this attribute are auto-generated and read-only. Default Value Syntax DirectoryString Example nsSSLClientAuth: TLS_RSA_WITH_AES_256_CBC_SHA::AES::SHA1::256 3.1.4.7. nsSSLPersonalitySSL This attribute contains the certificate name to use for SSL. Entry DN cn=encryption,cn=config Valid Values A certificate nickname Default Value Syntax DirectoryString Example: nsSSLPersonalitySSL: Server-Cert 3.1.4.8. nsSSLSessionTimeout This attribute sets the lifetime duration of a TLS connection. The minimum timeout value is 5 seconds. If a smaller value is set, then it is automatically replaced by 5 seconds. A value greater than the maximum value in the valid range below is replaced by the maximum value in the range. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=encryption,cn=config Valid Range 5 seconds to 24 hours Default Value 0, which means use the maximum value in the valid range above. Syntax Integer Example nsSSLSessionTimeout: 5 3.1.4.9. nsSSLSupportedCiphers This attribute contains the supported ciphers for the server. Entry DN cn=encryption,cn=config Valid Values A specific family, cipher, and strength string Default Value Syntax DirectoryString Example: nsSSLSupportedCiphers: TLS_RSA_WITH_AES_256_CBC_SHA::AES::SHA1::256 3.1.4.10. nsSSLToken This attribute contains the name of the token (security module) used by the server. Entry DN cn=encryption,cn=config Valid Values A module name Default Value Syntax DirectoryString Example: nsSSLToken: internal (software) 3.1.4.11. nsTLS1 Enables TLS version 1. The ciphers used with TLS are defined in the nsSSL3Ciphers attribute. If the sslVersionMin and sslVersionMax parameters are set in conjunction with nsTLS1 , Directory Server selects the most secure settings from these parameters. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=encryption,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsTLS1: on 3.1.4.12. nsTLSAllowClientRenegotiation Directory Server uses the SSL_OptionSet() network security services (NSS) function with the SSL_ENABLE_RENEGOTIATION option to control the TLS renegotiation behavior of NSS. The nsTLSAllowClientRenegotiation attribute controls which values Directory Server passes to the SSL_ENABLE_RENEGOTIATION option: If you set nsTLSAllowClientRenegotiation: on , Directory Server passes SSL_RENEGOTIATE_REQUIRES_XTN to the SSL_ENABLE_RENEGOTIATION option. In this case, NSS allows secure renegotiations attempts using RFC 5746 . If you set nsTLSAllowClientRenegotiation: off , Directory Server passes SSL_RENEGOTIATE_NEVER to the SSL_ENABLE_RENEGOTIATION option. In this case, NSS denies all renegotiations attempts, even secure ones. For further details about the NSS TLS renegotiation behavior, see the The RFC 5746 implementation in NSS (Network Security Services) section in the Is Red Hat affected by TLS renegotiation MITM attacks (CVE-2009-3555)? article. The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=encryption,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsTLSAllowClientRenegotiation: on 3.1.4.13. sslVersionMin The sslVersionMin parameter sets the minimum version of the TLS protocol Directory Server uses. However, by default, Directory Server sets this parameter automatically based on the system-wide crypto policy. If you set the crypto policy profile in the /etc/crypto-policies/config file to: DEFAULT , FUTURE , or FIPS , Directory Server sets sslVersionMin to TLS1.2 LEGACY , Directory Server sets sslVersionMin to TLS1.0 Alternatively, you can manually set sslVersionMin to higher value than the one defined in the crypto policy. The service must be restarted for changes to this attribute to take effect. Entry DN cn=encryption,cn=config Valid Values TLS protocol versions, such as TLS1.2 Default Value Depends on the system-wide crypto policy profile you set. Syntax DirectoryString Example: sslVersionMin: TLS1.2 3.1.4.14. sslVersionMax Sets the maximum version of the TLS protocol to be used. By default this value is set to the newest available protocol version in the NSS library installed on the system. The server has to be restarted for changes to this attribute to go into effect. If the sslVersionMin and sslVersionMax parameters are set in conjunction with nsTLS1 , Directory Server selects the most secure settings from these parameters. Entry DN cn=encryption,cn=config Valid Values TLS protocol version such as TLS1.0 Default Value Newest available protocol version in the NSS library installed on the system Syntax DirectoryString Example: sslVersionMax: TLS1.2 3.1.5. cn=features There are not attributes for the cn=features entry itself. This entry is only used as a parent container entry, with the nsContainer object class. The child entries contain an oid attribute to identify the feature and the directoryServerFeature object class, plus optional identifying information about the feature, such as specific ACLs. For example: 3.1.5.1. oid The oid attribute contains an object identifier assigned to a directory service feature. oid is used as the naming attribute for these directory features. OID 2.16.840.1.113730.3.1.215 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.1.6. cn=mapping tree Configuration attributes for suffixes, replication, and Windows synchronization are stored under cn=mapping tree,cn=config . Configuration attributes related to suffixes are found under the suffix subentry cn= suffix , cn=mapping tree,cn=config . For example, a suffix is the root entry in the directory tree, such as dc=example,dc=com . Replication configuration attributes are stored under cn=replica,cn= suffix , cn=mapping tree,cn=config . Replication agreement attributes are stored under cn= replicationAgreementName , cn=replica,cn= suffix ,cn=mapping tree,cn=config . Windows synchronization agreement attributes are stored under cn= syncAgreementName , cn=replica,cn= suffix ,cn=mapping tree,cn=config . 3.1.7. Suffix Configuration Attributes under cn=suffix_DN Suffix configurations are stored under the cn= "suffix_DN" ,cn=mapping tree,cn=config entry. These entries are instances of the nsMappingTree object class. The extensibleObject object class enables entries that belong to it to hold any user attribute. For suffix configuration attributes to be taken into account by the server, these object classes, in addition to the top object class, must be present in the entry. You must write the suffix DN in quotes because it contains characters such as equals signs (=), commas (,), and space characters. By using quotes, the DN appears correctly as a value in another DN. For example: cn="dc=example,dc=com",cn=mapping tree,cn=config For further details, see the corresponding section in the Directory Server Administration Guide . 3.1.7.1. cn This mandatory attribute sets the relative distinguished name (RDN) of a new suffix. Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values Any valid LDAP DN Default Value Syntax DirectoryString Example cn: dn=example,dc=com 3.1.7.2. nsslapd-backend This parameter sets the name of the database or database link used to process requests. It is multi-valued, with one database or database link per value. This attribute is required when the value of the nsslapd-state attribute is set to backend or referral on update . Set the value to the name of the back-end database entry instance under cn=ldbm database,cn=plugins,cn=config . For example: o=userroot,cn=ldbm database,cn=plugins,cn=config Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values Any valid partition name Default Value Syntax DirectoryString Example nsslapd-backend: userRoot 3.1.7.3. nsslapd-distribution-function The nssldap-distribution-function parameter sets the name of the custom distribution function. You must set this attribute when you set more than one database in the nsslapd-backend attribute. For further details about the custom distribution function, see the corresponding section in the Directory Server Administration Guide . Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values Any valid distribution function Default Value Syntax DirectoryString Example nsslapd-distribution-plugin: distribution_function_name 3.1.7.4. nsslapd-distribution-plugin The nssldap-distribution-plugin sets the shared library to be used with the custom distribution function. You must set this attribute when you set more than one database in the nsslapd-backend attribute. For further details about the custom distribution function, see the corresponding section in the Directory Server Administration Guide . Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values Any valid distribution plug-in Default Value Syntax DirectoryString Example nsslapd-distribution-plugin: /path/to/shared/library 3.1.7.5. nsslapd-parent If you want to create a sub suffix, use the nsslapd-parent attribute to define the parent suffix. If the attribute is not set, the new suffix is created as a root suffix. Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values Any valid partition name Default Value Syntax DirectoryString Example nsslapd-parent-suffix: dc=example,dc=com 3.1.7.6. nsslapd-referral This attribute sets the LDAP URL of the referral to be returned by the suffix. You can add the nssldap-referral attribute multiple times to set multiple referral URLs. You must set this attribute if you set the nsslapd-state parameter to referral or on update . Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values Any valid LDAP URL Default Value Syntax DirectoryString Example nssldap-referral: ldap://example.com/ 3.1.7.7. nsslapd-state This parameter determines how a suffix handles operations. The attribute takes the following values: backend : The back-end database processes all operations. disabled : The database is not available for processing operations. The server returns a No such search object error in response to requests made by client applications. referral : Directory Server returns a referral URL for requests to this suffix. referral on update : The database is used for all operations. Only for update requests is a referral sent. Parameter Description Entry DN cn= suffix_DN ,cn=mapping tree,cn=config Valid Values backend | disabled | referral | referral on update Default Value backend Syntax DirectoryString Example nsslapd-state: backend 3.1.8. Replication Attributes under cn=replica,cn=suffixDN,cn=mapping tree,cn=config Replication configuration attributes are stored under cn=replica,cn= suffix , cn=mapping tree,cn=config . The cn=replica entry is an instance of the nsDS5Replica object class. For replication configuration attributes to be taken into account by the server, this object class (in addition to the top object class) must be present in the entry. For further information about replication, see the "Managing Replication" chapter in the Red Hat Directory Server Administration Guide . The cn=replica,cn= suffix ,cn=mapping tree,cn=config entry must contain the following object classes: top extensibleObject nsds5replica 3.1.8.1. cn Sets the naming attribute for the replica. The cn attribute must be set to replica . Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values The value must be set to replica . Default Value replica Syntax DirectoryString Example cn=replica 3.1.8.2. nsds5DebugReplicaTimeout This attribute gives an alternate timeout period to use when the replication is run with debug logging. This can set only the time or both the time and the debug level: Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any numeric string Default Value Syntax DirectoryString Example nsds5debugreplicatimeout: 60:8192 3.1.8.3. nsDS5Flags This attribute sets replica properties that were previously defined in flags. At present only one flag exists, which sets whether the log changes. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values 0 | 1 * 0 : The replica does not write to the changelog; this is the default for consumers. * 1 : The replica writes to the changelog; this is the default for hubs and suppliers. Default Value 0 Syntax Integer Example nsDS5Flags: 0 3.1.8.4. nsDS5ReplConflict Although this attribute is not in the cn=replica entry, it is used in conjunction with replication. This multi-valued attribute is included on entries that have a change conflict that cannot be resolved automatically by the synchronization process. To check for replication conflicts requiring administrator intervention, perform an LDAP search for ( nsDS5ReplConflict=* ). For example: Using the search filter "(objectclass=nsTombstone)" also shows tombstone (deleted) entries. The value of the nsDS5ReplConflict contains more information about which entries are in conflict, usually by referring to them by their nsUniqueID . It is possible to search for a tombstone entry by its nsUniqueID . For example: 3.1.8.5. nsDS5ReplicaAutoReferral This attribute sets whether the Directory Server follows configured referrals for the database. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values on | off Default Value Syntax DirectoryString Example nsDS5ReplicaAutoReferral: on 3.1.8.6. nsState This attribute stores information on the state of the clock. It is designed only for internal use to ensure that the server cannot generate a change sequence number ( csn ) inferior to existing ones required for detecting backward clock errors. 3.1.8.7. nsDS5ReplicaAbortCleanRUV This read-only attribute specifies whether the background task that removes old RUV entries for obsolete or missing suppliers is being aborted. See Section 3.1.16.13, "cn=abort cleanallruv" for more information about this task. A value of 0 means that the task is inactive, and a value of 1 means that the task is active. This attribute is present to allow the abort task to be resumed after a server restart. When the task completes, the attribute is deleted. The server ignores the modify request if this value is set manually. Parameter Description Entry DN cn=replica,cn=suffixDN,cn=mapping tree,cn=config Valid Values 0 | 1 Default Value None Syntax Integer Example nsDS5ReplicaAbortCleanRUV: 1 3.1.8.8. nsds5ReplicaBackoffMin and nsds5ReplicaBackoffMax These attributes are used in environments with heavy replication traffic, where updates need to be sent as fast as possible. By default, if a remote replica is busy, the replication protocol will go into a "back off" state, and it will retry to send it updates at the interval of the back-off timer. By default, the timer starts at 3 seconds, and has a maximum wait period of 5 minutes. As these default settings maybe not be sufficient under certain circumstances, you can use nsds5ReplicaBackoffMin and nsds5ReplicaBackoffMax to configure the minimum and maximum wait times. The configuration settings can be applied while the server is online, and do not require a server restart. If invalid settings are used, then the default values are used instead. The configuration must be handled through CLI tools. 3.1.8.9. nsDS5ReplicaBindDN This multi-valued attribute specifies the DN to use when binding. Although there can be more than one value in this cn=replica entry, there can only be one supplier bind DN per replication agreement. Each value should be the DN of a local entry on the consumer server. If replication suppliers are using client certificate-based authentication to connect to the consumers, configure the certificate mapping on the consumer to map the subjectDN in the certificate to a local entry. Important For security reasons, do not set this attribute to cn=Directory Manager . Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example nsDS5ReplicaBindDN: cn=replication manager,cn=config 3.1.8.10. nsDS5ReplicaBindDNGroup The nsDS5ReplicaBindDNGroup attribute specifies a group DN. This group is then expanded and its members, including the members of its subgroups, are added to the replicaBindDNs attribute at startup or when the replica object is modified. This extends the current functionality provided by the nsDS5ReplicaBindDN attribute, as it allows to set a group DN. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid group DN Default Value Syntax DirectoryString Example nsDS5ReplicaBindDNGroup: cn=sample_group,ou=groups,dc=example,dc=com 3.1.8.11. nsDS5ReplicaBindDNGroupCheckInterval Directory Server checks for any changes in the groups specified in the nsDS5ReplicaBindDNGroup attribute and automatically rebuilds the list for the replicaBindDN parameter accordingly. These operations have a negative effect on performance and are therefore performed only at a specified interval set in the nsDS5ReplicaBindDNGroupCheckInterval attribute. This attribute accepts the following values: -1 : Disables the dynamic check at runtime. The administrator must restart the instance when the nsDS5ReplicaBindDNGroup attribute changes. 0 : Directory Server rebuilds the lists immediately after the groups are changed. Any positive 32-bit integer value: Minimum number of seconds that are required to pass since the last rebuild. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values -1 to maximum 32-bit integer (2147483647) Default Value -1 Syntax Integer Example nsDS5ReplicaBindDNGroupCheckInterval: 0 3.1.8.12. nsDS5ReplicaChangeCount This read-only attribute shows the total number of entries in the changelog and whether they still remain to be replicated. When the changelog is purged, only the entries that are still to be replicated remain. See Section 3.1.8.18, "nsDS5ReplicaPurgeDelay" and Section 3.1.8.23, "nsDS5ReplicaTombstonePurgeInterval" for more information about purge operation properties. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range -1 to maximum 32-bit integer (2147483647) Default Value Syntax Integer Example nsDS5ReplicaChangeCount: 675 3.1.8.13. nsDS5ReplicaCleanRUV This read-only attribute specifies whether the background task that removes old RUV entries for obsolete or missing suppliers is active. See Section 3.1.16.12, "cn=cleanallruv" for more information about this task. A value of 0 means that the task is inactive, and a value of 1 means that the task is active. This attribute is present to allow the cleanup task to be resumed after a server restart. When the task completes, the attribute is deleted. The server ignores the modify request if this value is set manually. Parameter Description Entry DN cn=replica,cn=suffixDN,cn=mapping tree,cn=config Valid Values 0 | 1 Default Value None Syntax Integer Example nsDS5ReplicaCleanRUV: 0 3.1.8.14. nsDS5ReplicaId This attribute sets the unique ID for suppliers in a given replication environment. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range For suppliers: 1 to 65534 For consumers and hubs: 65535 Default Value Syntax Integer Example nsDS5ReplicaId: 1 3.1.8.15. nsDS5ReplicaLegacyConsumer If this attribute is absent or has a value of false , then it means that the replica is not a legacy consumer. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsDS5ReplicaLegacyConsumer: false 3.1.8.16. nsDS5ReplicaName This attribute specifies the name of the replica with a unique identifier for internal operations. If it is not specified, this unique identifier is allocated by the server when the replica is created. Note It is recommended that the server be permitted to generate this name. However, in certain circumstances, for example, in replica role changes (supplier to hub etc.), this value needs to be specified. Otherwise, the server will not use the correct changelog database, and replication fails. This attribute is destined for internal use only. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Default Value Syntax DirectoryString (a UID identifies the replica) Example nsDS5ReplicaName: 66a2b699-1dd211b2-807fa9c3-a58714648 3.1.8.17. nsds5ReplicaProtocolTimeout When stopping the server, disabling replication, or removing a replication agreement, there is a timeout on how long to wait before stopping replication when the server is under load. The nsds5ReplicaProtocolTimeout attribute can be used to configure this timeout and its default value is 120 seconds. There may be scenarios where a timeout of 2 minutes is too long, or not long enough. For example, a particular replication agreement may need more time before ending a replication session during a shutdown. This attribute can be added to the main replication configuration entry for a back end: Parameter Description Entry DN cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) in seconds Default value 120 Syntax Integer Example nsds5ReplicaProtocolTimeout: 120 The nsds5ReplicaProtocolTimeout attribute can also be added to a replication agreement. The replication agreement protocol timeout overrides the timeout set in the main replica configuration entry. This allows different timeouts for different replication agreements. If a replication session is in progress, a new timeout will abort that session and allow the server to shutdown. 3.1.8.18. nsDS5ReplicaPurgeDelay This attribute controls the maximum age of deleted entries (tombstone entries) and state information. The Directory Server stores tombstone entries and state information so that when a conflict occurs in a multi-supplier replication process, the server resolves the conflicts based on the timestamp and replica ID stored in the change sequence numbers. An internal Directory Server housekeeping operation periodically removes tombstone entries which are older than the value of this attribute (in seconds). State information which is older than the nsDS5ReplicaPurgeDelay value is removed when an entry which contains the state information is modified. Not every tombstone and state information may be removed because, with multi-supplier replication, the server may need to keep a small number of the latest updates to prime replication, even if they are older than the value of the attribute. This attribute specifies the interval, in seconds, to perform internal purge operations on an entry. When setting this attribute, ensure that the purge delay is longer than the longest replication cycle in the replication policy to preserve enough information to resolve replication conflicts and to prevent the copies of data stored in different servers from diverging. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range 0 (keep forever) to maximum 32-bit integer (2147483647) Default Value 604800 [1 week (60x60x24x7)] Syntax Integer Example nsDS5ReplicaPurgeDelay: 604800 3.1.8.19. nsDS5ReplicaReapActive This read-only attribute specifies whether the background task that removes old tombstones (deleted entries) from the database is active. See Section 3.1.8.23, "nsDS5ReplicaTombstonePurgeInterval" for more information about this task. A value of 0 means that the task is inactive, and a value of 1 means that the task is active. The server ignores the modify request if this value is set manually. Parameter Description Entry DN cn=replica,cn=suffixDN,cn=mapping tree,cn=config Valid Values 0 | 1 Default Value Syntax Integer Example nsDS5ReplicaReapActive: 0 3.1.8.20. nsDS5ReplicaReferral This multi-valued attribute specifies the user-defined referrals. This should only be defined on a consumer. User referrals are only returned when a client attempts to modify data on a read-only consumer. This optional referral overrides the referral that is automatically configured by the consumer by the replication protocol. The URL can use the format ldap[s]:// host_name : port_number or ldap[s]:// IP_address : port_number , with an IPv4 or IPv6 address. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid LDAP URL Default Value Syntax DirectoryString Example nsDS5ReplicaReferral: ldap://server.example.com:389 3.1.8.21. nsDS5ReplicaReleaseTimeout This attribute, when used on suppliers and hubs in multi-supplier scenarios, determines a timeout period (in seconds) after which a supplier will release a replica. This is useful in situations when problems such as a slow network connection causes one supplier to acquire access to a replica and hold it for a long time, preventing all other suppliers from accessing it and sending updates. If this attribute is set, replicas are released by suppliers after the specified period, resulting in improved replication performance. Setting this attribute to 0 disables the timeout. Any other value determines the length of the timeout in seconds. Important Avoid setting this attribute to values between 1 and 30 . In most scenarios, short timeouts decrease the replication performance. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values 0 to maximum 32-bit integer (2147483647) in seconds Default Value 60 Syntax Integer Example nsDS5ReplicaReleaseTimeout: 60 3.1.8.22. nsDS5ReplicaRoot This attribute sets the DN at the root of a replicated area. This attribute must have the same value as the suffix of the database being replicated and cannot be modified. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Suffix of the database being replicated, which is the suffix DN Default Value Syntax DirectoryString Example nsDS5ReplicaRoot: "dc=example,dc=com" 3.1.8.23. nsDS5ReplicaTombstonePurgeInterval This attribute specifies the time interval in seconds between purge operation cycles. Periodically, the server runs an internal housekeeping operation to purge old update and state information from the changelog and the main database. See Section 3.1.8.18, "nsDS5ReplicaPurgeDelay" . When setting this attribute, remember that the purge operation is time-consuming, especially if the server handles many delete operations from clients and suppliers. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) in seconds Default Value 86400 (1 day) Syntax Integer Example nsDS5ReplicaTombstonePurgeInterval: 86400 3.1.8.24. nsDS5ReplicaType Defines the type of replication relationship that exists between this replica and the others. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values 0 | 1 | 2 | 3 * 0 means unknown * 1 means primary (not yet used) * 2 means consumer (read-only) * 3 consumer/supplier (updateable) Default Value Syntax Integer Example nsDS5ReplicaType: 2 3.1.8.25. nsds5Task This attribute launches a replication task, such as dumping the database contents to an LDIF file or removing obsolete suppliers from the replication topology. You can set the nsds5Task attribute to one of the following values: cl2ldif : Exports the changelog to an LDIF file in the /var/lib/dirsrv/slapd- instance_name /changelogdb/ directory. ldif2cl : Imports the changelog from an LDIF file stored in the /var/lib/dirsrv/slapd- instance_name /changelogdb/ directory. cleanruv : Removes a Replica Update Vector (RUV) from the suppliers where you run the operation. cleanallruv : Removes RUVs from all servers in a replication topology. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values * cl2ldif * ldif2cl * cleanruv * cleanallruv Default Value Syntax DirectoryString Example nsds5Task: cleanallruv 3.1.9. Replication Attributes under cn=ReplicationAgreementName,cn=replica,cn=suffixName,cn=mapping tree,cn=config The replication attributes that concern the replication agreement are stored under cn= ReplicationAgreementName , cn=replica,cn= suffixDN , cn=mapping tree,cn=config . The cn= ReplicationAgreementName entry is an instance of the nsDS5ReplicationAgreement object class. Replication agreements are configured only on supplier replicas. 3.1.9.1. cn This attribute is used for naming. Once this attribute has been set, it cannot be modified. This attribute is required for setting up a replication agreement. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid cn Default Value Syntax DirectoryString Example cn: SupplierAtoSupplierB 3.1.9.2. description Free form text description of the replication agreement. This attribute can be modified. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any string Default Value Syntax DirectoryString Example description: Replication Agreement between Server A and Server B. 3.1.9.3. nsDS5ReplicaBindDN This attribute sets the DN to use when binding to the consumer during replication. The value of this attribute must be the same as the one in cn=replica on the consumer replica. This may be empty if certificate-based authentication is used, in which case the DN used is the subject DN of the certificate, and the consumer must have appropriate client certificate mapping enabled. This can also be modified. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid DN (can be empty if client certificates are used) Default Value Syntax DirectoryString Example nsDS5ReplicaBindDN: cn=replication manager,cn=config 3.1.9.4. nsDS5ReplicaBindMethod This attribute sets the method for the server to use to bind to the consumer server. The nsDS5ReplicaBindMethod supports the following values: Empty or SIMPLE : The server uses password-based authentication. When using this bind method, additionally, set the nsds5ReplicaBindDN and nsds5ReplicaCredentials parameters to provide a user name and password. SSLCLIENTAUTH : Enables certificate-based authentication between the supplier and consumer. For this, the consumer server must have a certificate mapping configured to map the supplier's certificate to the replication manager entry. SASL/GSSAPI : Enables Kerberos authentication using SASL. This requires that the supplier server have a Kerberos keytab, and the consumer server a SASL mapping entry configured to map the supplier's Kerberos principal to the replication manager entry. For further details, see the following sections in the Red Hat Directory Server Administration Guide : About the KDC Server and Keytabs Configuring SASL Identity Mapping from the Console SASL/DIGEST-MD5 : Enables password-based authentication using SASL with the DIGEST-MD5 mechanism. When using this bind method, additionally, set the nsds5ReplicaBindDN and nsds5ReplicaCredentials parameters to provide a user name and password. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values SIMPLE | SSLCLIENTAUTH | SASL/GSSAPI | SASL/DIGEST Default Value SIMPLE Syntax DirectoryString Example nsDS5ReplicaBindMethod: SIMPLE 3.1.9.5. nsds5ReplicaBootstrapBindDN The nsds5ReplicaBootstrapBindDN parameter sets the fall-back bind distinguished name (DN) that Directory Server uses when the supplier fails to bind to a consumer due to an LDAP_INVALID_CREDENTIALS (err=49) , LDAP_INAPPROPRIATE_AUTH (err=48) , or LDAP_NO_SUCH_OBJECT (err=32) error. In these cases, Directory Server uses the information from the nsds5ReplicaBootstrapBindDN , nsds5ReplicaBootstrapCredentials , nsds5ReplicaBootstrapBindMethod , and nsds5ReplicaBootstrapTransportInfo parameters to establish the connection. If the server also fails to establish the connection using these bootstrap settings, the server stops trying to connect. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example nsds5ReplicaBootstrapBindDN: cn=replication manager,cn=config 3.1.9.6. nsds5ReplicaBootstrapBindMethod The nsds5ReplicaBootstrapBindMethod parameter sets the password for the fall-back login mechanism that Directory Server uses when the supplier fails to bind to a consumer due to an LDAP_INVALID_CREDENTIALS (err=49) , LDAP_INAPPROPRIATE_AUTH (err=48) , or LDAP_NO_SUCH_OBJECT (err=32) error. In these cases, Directory Server uses the information from the nsds5ReplicaBootstrapBindDN , nsds5ReplicaBootstrapCredentials , nsds5ReplicaBootstrapBindMethod , and nsds5ReplicaBootstrapTransportInfo parameters to establish the connection. If the server also fails to establish the connection using these bootstrap settings, the server stops trying to connect. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values SIMPLE | SSLCLIENTAUTH | SASL/GSSAPI | SASL/DIGEST Default Value Syntax DirectoryString Example nsds5ReplicaBootstrapBindMethod: SIMPLE 3.1.9.7. nsds5ReplicaBootstrapCredentials The nsds5ReplicaBootstrapCredentials parameter sets the password for the fall-back bind distinguished name (DN) that Directory Server uses when the supplier fails to bind to a consumer due to an LDAP_INVALID_CREDENTIALS (err=49) , LDAP_INAPPROPRIATE_AUTH (err=48) , or LDAP_NO_SUCH_OBJECT (err=32) error. In these cases, Directory Server uses the information from the nsds5ReplicaBootstrapBindDN , nsds5ReplicaBootstrapCredentials , nsds5ReplicaBootstrapBindMethod , and nsds5ReplicaBootstrapTransportInfo parameters to establish the connection. If the server also fails to establish the connection using these bootstrap settings, the server stops trying to connect. Directory Server automatically hashes the password using the AES reversible password encryption algorithm when you set the parameter in clear text. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid string. Default Value Syntax DirectoryString Example nsds5ReplicaBootstrapCredentials: password 3.1.9.8. nsds5ReplicaBootstrapTransportInfo The nsds5ReplicaBootstrapTransportInfo parameter sets the encryption method for the connection to and from the replica for the fall-back connection that Directory Server uses when the supplier fails to bind to a consumer due to an LDAP_INVALID_CREDENTIALS (err=49) , LDAP_INAPPROPRIATE_AUTH (err=48) , or LDAP_NO_SUCH_OBJECT (err=32) error. In these cases, Directory Server uses the information from the nsds5ReplicaBootstrapBindDN , nsds5ReplicaBootstrapCredentials , nsds5ReplicaBootstrapBindMethod , and nsds5ReplicaBootstrapTransportInfo parameters to establish the connection. If the server also fails to establish the connection using these bootstrap settings, the server stops trying to connect. The attribute takes the following values: TLS : The connection uses the StartTLS command to start the encryption. SSL : The connection uses LDAPS with TLS encryption. LDAP : The connection is not encrypted. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values TLS | SSL | LDAP Default Value Syntax DirectoryString Example nsds5ReplicaBootstrapTransportInfo: SSL 3.1.9.9. nsDS5ReplicaBusyWaitTime This attribute sets the amount of time in seconds a supplier should wait after a consumer sends back a busy response before making another attempt to acquire access. The default value is three (3) seconds. If the attribute is set to a negative value, Directory Server sends the client a message and an LDAP_UNWILLING_TO_PERFORM error code. The nsDS5ReplicaBusyWaitTime attribute works in conjunction with the nsDS5ReplicaSessionPauseTime attribute. The two attributes are designed so that the nsDS5ReplicaSessionPauseTime interval is always at least one second longer than the interval specified for nsDS5ReplicaBusyWaitTime . The longer interval gives waiting suppliers a better chance to gain consumer access before the supplier can re-access the consumer. Set the nsDS5ReplicaBusyWaitTime attribute at any time by using changetype:modify with the replace operation. The change takes effect for the update session if one is already in progress. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid integer Default Value 3 Syntax Integer Example nsDS5ReplicaBusyWaitTime: 3 3.1.9.10. nsDS5ReplicaChangesSentSinceStartup This read-only attribute shows the number of changes sent to this replica since the server started. The actual value in the attribute is stored as a binary blob; in the Directory Server Console, this value is a ratio, in the form replica_id:changes_sent/changes_skipped . For example, for 100 changes sent and no changes skipped for replica 7, the attribute value is displayed in the Console as 7:100/0. In the command line, the attribute value is shown in a binary form. For example: Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) Default Value Syntax Integer Example nsds5replicaChangesSentSinceStartup:: MToxLzAg 3.1.9.11. nsDS5ReplicaCredentials This attribute sets the credentials for the bind DN specified in the nsDS5ReplicaBindDN attribute. Directory Server uses this password to connect to the consumer. The example below shows the encrypted value, as stored in the /etc/dirsrv/slapd- instance_name /dse.ldif file and not the actual password. To set a value, set it in clear text, for example nsDS5ReplicaCredentials: password . Directory Server then encrypts the password using the AES reversible password encryption schema when it stores the value. When you use certificate-based authentication, this attribute does not have a value set. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid password Default Value Syntax DirectoryString {AES- Base64-algorithm-id } encoded_password Example nsDS5ReplicaCredentials: {AES-TUhNR0NT... }VoglUB8GG5A... 3.1.9.12. nsds5ReplicaEnabled This attribute sets whether a replication agreement is active, meaning whether replication is occurring per that agreement. The default is on , so that replication is enabled. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsds5ReplicaEnabled: off 3.1.9.13. nsds5ReplicaFlowControlPause This parameters sets the time in milliseconds to pause after reaching the number of entries and updates set in the nsds5ReplicaFlowControlWindow parameter is reached. Updating both the nsds5ReplicaFlowControlWindow and nsds5ReplicaFlowControlPause parameters enables you to fine-tune the replication throughput. For further details, see Section 3.1.9.14, "nsds5ReplicaFlowControlWindow" . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn= replication_agreement_name ,cn=replica,cn= suffix_DN ,cn=mapping tree,cn=config Valid Values 0 to maximum 64-bit long Default Value 2000 Syntax Integer Example nsds5ReplicaFlowControlPause: 2000 3.1.9.14. nsds5ReplicaFlowControlWindow This attribute sets the maximum number of entries and updates sent by a supplier, which are not acknowledged by the consumer. After reaching the limit, the supplier pauses the replication agreement for the time set in the nsds5ReplicaFlowControlPause parameter. Updating both the nsds5ReplicaFlowControlWindow and nsds5ReplicaFlowControlPause parameters enables you to fine-tune the replication throughput. Update this setting if the supplier sends entries and updates faster than the consumer can import or update, and acknowledge the data. In this case, the following message is logged in the supplier's error log file: This setting does not require restarting the server to take effect. Parameter Description Entry DN cn= replication_agreement_name ,cn=replica,cn= suffix_DN ,cn=mapping tree,cn=config Valid Values 0 to maximum 64-bit long Default Value 1000 Syntax Integer Example nsds5ReplicaFlowControlWindow: 1000 3.1.9.15. nsDS5ReplicaHost This attribute sets the host name for the remote server containing the consumer replica. Once this attribute has been set, it cannot be modified. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid host server name Default Value Syntax DirectoryString Example nsDS5ReplicaHost: ldap2.example.com 3.1.9.16. nsDS5ReplicaLastInitEnd This optional, read-only attribute states when the initialization of the consumer replica ended. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values YYYYMMDDhhmmssZ is the date/time in Generalized Time form at which the connection was opened. This value gives the time in relation to Greenwich Mean Time. The hours are set with a 24-hour clock. The Z at the end indicates that the time is relative to Greenwich Mean Time. Default Value Syntax GeneralizedTime Example nsDS5ReplicaLastInitEnd: 20200504121603Z 3.1.9.17. nsDS5ReplicaLastInitStart This optional, read-only attribute states when the initialization of the consumer replica started. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values YYYYMMDDhhmmssZ is the date/time in Generalized Time form at which the connection was opened. This value gives the time in relation to Greenwich Mean Time. The hours are set with a 24-hour clock. The Z at the end indicates that the time is relative to Greenwich Mean Time. Default Value Syntax GeneralizedTime Example nsDS5ReplicaLastInitStart: 20200503030405 3.1.9.18. nsDS5ReplicaLastInitStatus This optional, read-only attribute provides status for the initialization of the consumer. There is typically a numeric code followed by a short string explaining the status. Zero ( 0 ) means success. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values 0 (Consumer Initialization Succeeded), followed by any other status message. Default Value Syntax String Example nsDS5ReplicaLastInitStatus: 0 Consumer Initialization Succeeded 3.1.9.19. nsDS5ReplicaLastUpdateEnd This read-only attribute states when the most recent replication schedule update ended. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values YYYYMMDDhhmmssZ is the date/time in Generalized Time form at which the connection was opened. This value gives the time in relation to Greenwich Mean Time. The hours are set with a 24-hour clock. The Z at the end indicates that the time is relative to Greenwich Mean Time. Default Value Syntax GeneralizedTime Example nsDS5ReplicaLastUpdateEnd: 20200502175801Z 3.1.9.20. nsDS5ReplicaLastUpdateStart This read-only attribute states when the most recent replication schedule update started. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values YYYYMMDDhhmmssZ is the date/time in Generalized Time form at which the connection was opened. This value gives the time in relation to Greenwich Mean Time. The hours are set with a 24-hour clock. The Z at the end indicates that the time is relative to Greenwich Mean Time. Default Value Syntax GeneralizedTime Example nsDS5ReplicaLastUpdateStart: 20200504122055Z 3.1.9.21. nsds5replicaLastUpdateStatus In the read-only nsds5replicaLastUpdateStatus attribute of each replication agreement, Directory Server displays the latest status of the agreement. For a list of status, see Appendix B, Replication Agreement Status . Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values See Appendix B, Replication Agreement Status . Default Value Syntax DirectoryString Example nsds5replicaLastUpdateStatus: Error (0) Replica acquired successfully: Incremental update succeeded 3.1.9.22. nsDS5ReplicaPort This attribute sets the port number for the remote server containing the replica. Once this attribute has been set, it cannot be modified. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Port number for the remote server containing the replica Default Value Syntax Integer Example nsDS5ReplicaPort:389 3.1.9.23. nsDS5ReplicaReapActive This read-only attribute specifies whether the background task that removes old tombstones (deleted entries) from the database is active. See Section 3.1.8.23, "nsDS5ReplicaTombstonePurgeInterval" for more information about this task. A value of zero ( 0 ) means that the task is inactive, and a value of 1 means that the task is active. If this value is set manually, the server ignores the modify request. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values 0 | 1 Default Value Syntax Integer Example nsDS5ReplicaReapActive: 0 3.1.9.24. nsDS5BeginReplicaRefresh Initializes the replica. This attribute is absent by default. However, if this attribute is added with a value of start , then the server initializes the replica and removes the attribute value. To monitor the status of the initialization procedure, poll for this attribute. When initialization is finished, the attribute is removed from the entry, and the other monitoring attributes can be used for detailed status inquiries. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values stop | start Default Value Syntax DirectoryString Example nsDS5BeginReplicaRefresh: start 3.1.9.25. nsDS5ReplicaRoot This attribute sets the DN at the root of a replicated area. This attribute must have the same value as the suffix of the database being replicated and cannot be modified. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Suffix of the database being replicated - same as suffixDN above Default Value Syntax DirectoryString Example nsDS5ReplicaRoot: "dc=example,dc=com" 3.1.9.26. nsDS5ReplicaSessionPauseTime This attribute sets the amount of time in seconds a supplier should wait between update sessions. The default value is 0 . If the attribute is set to a negative value, Directory Server sends the client a message and an LDAP_UNWILLING_TO_PERFORM error code. The nsDS5ReplicaSessionPauseTime attribute works in conjunction with the nsDS5ReplicaBusyWaitTime attribute. The two attributes are designed so that the nsDS5ReplicaSessionPauseTime interval is always at least one second longer than the interval specified for nsDS5ReplicaBusyWaitTime . The longer interval gives waiting suppliers a better chance to gain consumer access before the supplier can re-access the consumer. If either attribute is specified but not both, nsDS5ReplicaSessionPauseTime is set automatically to 1 second more than nsDS5ReplicaBusyWaitTime . If both attributes are specified, but nsDS5ReplicaSessionPauseTime is less than or equal to nsDS5ReplicaBusyWaitTime , nsDS5ReplicaSessionPauseTime is set automatically to 1 second more than nsDS5ReplicaBusyWaitTime . When setting the values, ensure that the nsDS5ReplicaSessionPauseTime interval is at least 1 second longer than the interval specified for nsDS5ReplicaBusyWaitTime . Increase the interval as needed until there is an acceptable distribution of consumer access among the suppliers. Set the nsDS5ReplicaSessionPauseTime attribute at any time by using changetype:modify with the replace operation. The change takes effect for the update session if one is already in progress. If Directory Server has to reset the value of nsDS5ReplicaSessionPauseTime automatically, the value is changed internally only. The change is not visible to clients, and it is not saved to the configuration file. From an external viewpoint, the attribute value appears as originally set. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid integer Default Value 0 Syntax Integer Example nsDS5ReplicaSessionPauseTime: 0 3.1.9.27. nsds5ReplicaStripAttrs Fractional replication allows a list of attributes which are removed from replication updates ( nsDS5ReplicatedAttributeList ). However, a change to an excluded attribute still triggers a modify event and generates an empty replication update. The nsds5ReplicaStripAttrs attribute adds a list of attributes which cannot be sent in an empty replication event and are stripped from the update sequence. This logically includes operational attribtes like modifiersName . If a replication event is not empty, the stripped attributes are replicated. These attributes are removed from updates only if the event would otherwise be emtpy. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range A space-separated list of any supported directory attribute Default Value Syntax DirectoryString Example nsds5ReplicaStripAttrs: modifiersname modifytimestamp 3.1.9.28. nsDS5ReplicatedAttributeList This allowed attribute specifies any attributes that are not replicated to a consumer server. Fractional replication allows databases to be replicated across slow connections or to less secure consumers while still protecting sensitive information. By default, all attributes are replicated, and this attribute is not present. For more information on fractional replication, see the "Managing Replication" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range Default Value Syntax DirectoryString Example nsDS5ReplicatedAttributeList: (objectclass=*) USD EXCLUDE accountlockout memberof 3.1.9.29. nsDS5ReplicatedAttributeListTotal This allowed attribute specifies any attributes that are not replicated to a consumer server during a total update. Fractional replication only replicates specified attributes. This improves the overall network performance. However, there may be times when administrators want to restrict some attributes using fractional replication during an incremental update but allow those attributes to be replicated during a total update (or vice versa). By default, all attributes are replicated. nsDS5ReplicatedAttributeList sets the incremental replication list; if only nsDS5ReplicatedAttributeList is set, then this list applies to total updates as well. nsDS5ReplicatedAttributeListTotal sets the list of attributes to exclude only from a total update. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range Default Value Syntax DirectoryString Example nsDS5ReplicatedAttributeListTotal: (objectclass=*) USD EXCLUDE accountlockout 3.1.9.30. nsDS5ReplicaTimeout This allowed attribute specifies the number of seconds outbound LDAP operations waits for a response from the remote replica before timing out and failing. If the server writes Warning: timed out waiting messages in the error log file, then increase the value of this attribute. Find out the amount of time the operation actually lasted by examining the access log on the remote machine, and then set the nsDS5ReplicaTimeout attribute accordingly to optimize performance. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range 0 to maximum 32-bit integer value (2147483647) in seconds Default Value 120 Syntax Integer Example nsDS5ReplicaTimeout: 120 3.1.9.31. nsDS5ReplicaTransportInfo This attribute sets the type of transport used for transporting data to and from the replica. This attribute cannot be modified once it is set. The attribute takes the following values: StartTLS : The connection uses encryption using the StartTLS command. LDAPS : The connection uses TLS encryption. LDAP : The connection uses the unencrypted LDAP protocol. This value is also used, if the nsDS5ReplicaTransportInfo attribute is not set. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values StartTLS | LDAPS | LDAP Default Value absent Syntax DirectoryString Example nsDS5ReplicaTransportInfo: StartTLS 3.1.9.32. nsDS5ReplicaUpdateInProgress This read-only attribute states whether or not a replication update is in progress. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values true | false Default Value Syntax DirectoryString Example nsDS5ReplicaUpdateInProgress: true 3.1.9.33. nsDS5ReplicaUpdateSchedule This multi-valued attribute specifies the replication schedule and can be modified. Changes made to this attribute take effect immediately. Modifying this value can be useful to pause replication and resume it later. For example, if this value to 0000-0001 0 , this in effect causes the server to stop sending updates for this replication agreement. The server continues to store them for replay later. If the value is later changed back to 0000-2359 0123456 , this makes replication immediately resume and sends all pending changes. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range Time schedule presented as XXXX-YYYY 0123456 , where XXXX is the starting hour, YYYY is the finishing hour, and the numbers 0123456 are the days of the week starting with Sunday. Default Value 0000-2359 0123456 (all the time) Syntax Integer Example nsDS5ReplicaUpdateSchedule: 0000-2359 0123456 3.1.9.34. nsDS5ReplicaWaitForAsyncResults In a replication environment, the nsDS5ReplicaWaitForAsyncResults parameter sets the time in milliseconds for which a supplier waits if the consumer is not ready before resending data. Note that if you set the parameter to 0 , the default value is used. Parameter Description Entry DN cn= ReplicationAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) Default Value 100 Syntax Integer Example nsDS5ReplicaWaitForAsyncResults: 100 3.1.9.35. nsDS50ruv This attribute stores the last replica update vector (RUV) read from the consumer of this replication agreement. It is always present and must not be changed. 3.1.9.36. nsruvReplicaLastModified This attribute contains the most recent time that an entry in the replica was modified and the changelog was updated. 3.1.9.37. nsds5ReplicaProtocolTimeout When stopping the server, disabling replication, or removing a replication agreement, there is a timeout on how long to wait before stopping replication when the server is under load. The nsds5ReplicaProtocolTimeout attribute can be used to configure this timeout and its default value is 120 seconds. There may be scenarios where a timeout of 2 minutes is too long, or not long enough. For example, a particular replication agreement may need more time before ending a replication session during a shutdown. This attribute can be added to the main replication configuration entry for a back end: Parameter Description Entry DN cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) in seconds Default value 120 Syntax Integer Example nsds5ReplicaProtocolTimeout: 120 The nsds5ReplicaProtocolTimeout attribute can also be added to a replication agreement. The replication agreement protocol timeout overrides the timeout set in the main replica configuration entry. This allows different timeouts for different replication agreements. If a replication session is in progress, a new timeout will abort that session and allow the server to shutdown. 3.1.10. Synchronization Attributes under cn=syncAgreementName,cn=WindowsReplica,cn=suffixName,cn=mapping tree,cn=config The synchronization attributes that concern the synchronization agreement are stored under cn= syncAgreementName , cn=WindowsReplica,cn= suffixDN , cn=mapping tree,cn=config . The cn= syncAgreementName entry is an instance of the nsDSWindowsReplicationAgreement object class. For synchronization agreement configuration attributes to be taken into account by the server, this object class (in addition to the top object class) must be present in the entry. Synchronization agreements are configured only on databases that are enabled to synchronize with Windows Active Directory servers. Table 3.6. List of Attributes Shared Between Replication and Synchronization Agreements cn nsDS5ReplicaLastUpdateEnd description nsDS5ReplicaLastUpdateStart nsDS5ReplicaBindDN (the Windows sync manager ID) nsDS5ReplicaLastUpdateStatus nsDS5ReplicaBindMethod nsDS5ReplicaPort nsDS5ReplicaBusyWaitTime nsDS5ReplicaRoot nsDS5ReplicaChangesSentSinceStartup nsDS5ReplicaSessionPauseTime nsDS5ReplicaCredentials (the Windows sync manager password) nsDS5ReplicaTimeout nsDS5ReplicaHost (the Windows host) nsDS5ReplicaTransportInfo nsDS5ReplicaLastInitEnd nsDS5ReplicaUpdateInProgress nsDS5ReplicaLastInitStart nsDS5ReplicaUpdateSchedule nsDS5ReplicaLastInitStatus nsDS50ruv winSyncMoveAction winSyncInterval nsds5ReplicaStripAttrs 3.1.10.1. nsds7DirectoryReplicaSubtree The suffix or DN of the Directory Server subtree that is being synchronized. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid suffix or subsuffix Default Value Syntax DirectoryString Example nsDS7DirectoryReplicaSubtree: ou=People,dc=example,dc=com 3.1.10.2. nsds7DirsyncCookie This string is created by Active Directory DirSync and gives the state of the Active Directory Server at the time of the last synchronization. The old cookie is sent to Active Directory with each Directory Server update; a new cookie is returned along with the Windows directory data. This means only entries which have changed since the last synchronization are retrieved. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any string Default Value Syntax DirectoryString Example nsDS7DirsyncCookie::khDKJFBZsjBDSCkjsdhIU74DJJVBXDhfvjmfvbhzxj 3.1.10.3. nsds7NewWinGroupSyncEnabled This attribute sets whether a new group created in the Windows sync peer is automatically synchronized by creating a new group on the Directory Server. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values on | off Default Value Syntax DirectoryString Example nsDS7NewWinGroupSyncEnabled: on 3.1.10.4. nsds7NewWinUserSyncEnabled This attribute sets whether a new entry created in the Windows sync peer is automatically synchronized by creating a new entry on the Directory Server. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values on | off Default Value Syntax DirectoryString Example nsDS7NewWinUserSyncEnabled: on 3.1.10.5. nsds7WindowsDomain This attribute sets the name of the Windows domain to which the Windows sync peer belongs. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid domain name Default Value Syntax DirectoryString Example nsDS7WinndowsDomain: DOMAINWORLD 3.1.10.6. nsds7WindowsReplicaSubtree The suffix or DN of the Windows subtree that is being synchronized. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values Any valid suffix or subsuffix Default Value Syntax DirectoryString Example nsDS7WindowsReplicaSubtree: cn=Users,dc=domain,dc=com 3.1.10.7. oneWaySync This attribute sets which direction to perform synchronization. This can either be from the Active Directory server to the Directory Server or from the Directory Server to the Active Directory server. If this attribute is absent (the default), then the synchronization agreement is bi-directional , so changes made in both domains are synchronized. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values toWindows | fromWindows | null Default Value Syntax DirectoryString Example oneWaySync: fromWindows 3.1.10.8. winSyncInterval This attribute sets how frequently, in seconds, the Directory Server polls the Windows sync peer to look for changes in the Active Directory entries. If this entry is not set, the Directory Server checks the Windows server every five (5) minutes, meaning the default value is 300 (300 seconds). This value can be set lower to write Active Directory changes over to the Directory Server faster or raised if the directory searches are taking too long. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values 1 to the maximum 32-bit integer value (2147483647) Default Value 300 Syntax Integer Example winSyncInterval: 600 3.1.10.9. winSyncMoveAction The synchronization process starts at the root DN to begin evaluating entries for synchronization. Entries are correlated based on the samAccount in the Active Directory and the uid attribute in Directory Server. The synchronization plug-in notes if a previously synced entry (based on the samAccount/uid relationship) is removed from the synced subtree either because it is deleted or moved, then the synchronization plug-in recognizes that the entry is no longer to be synced. The winSyncMoveAction attribute for the synchronization agreement sets instructions on how to handle these moved entries: none takes no action, so if a synced Directory Server entry exists, it may be synced over to or create an Active Directory entry within scope. If no synced Directory Server entry exists, nothing happens at all (this is the default behavior). unsync removes any sync-related attributes ( ntUser or ntGroup ) from the Directory Server entry but otherwise leaves the Directory Server entry intact. The Active Directory and Directory Server entries exist in tandem. Important There is a risk when unsyncing entries that the Active Directory entry may be deleted at a later time, and the Directory Server entry will be left intact. This can create data inconsistency issues, especially if the Directory Server entry is ever used to recreate the entry on the Active Directory side later. delete deletes the corresponding entry on the Directory Server side, regardless of whether it was ever synced with Active Directory (this was the default behavior in 9.0). Important You almost never want to delete a Directory Server entry without deleting the corresponding Active Directory entry. This option is available only for compatibility with Directory Server 9.0 systems. Parameter Description Entry DN cn= syncAgreementName ,cn=replica,cn= suffixDN ,cn=mapping tree,cn=config Valid Values none | delete | unsync Default Value none Syntax DirectoryString Example winSyncMoveAction: unsync 3.1.11. cn=monitor Information used to monitor the server is stored under cn=monitor . This entry and its children are read-only; clients cannot directly modify them. The server updates this information automatically. This section describes the cn=monitor attributes. The only attribute that can be changed by a user to set access control is the aci attribute. If the nsslapd-counters attribute in cn=config is set to on (the default setting), then all of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For the cn=monitor entry, the 64-bit integers are used with the opsinitiated , opscompleted , entriessent , and bytessent counters. Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. connection This attribute lists open connections and associated status and performance related information and values. These are given in the following format: For example: A is the connection number, which is the number of the slot in the connection table associated with this connection. This is the number logged as slot= A in the access log message when this connection was opened, and usually corresponds to the file descriptor associated with the connection. The attribute dTableSize shows the total size of the connection table. YYYYMMDDhhmmssZ is the date and time, in GeneralizedTime form, at which the connection was opened. This value gives the time in relation to Greenwich Mean Time. B is the number of operations received on this connection. C is the number of completed operations. D is r if the server is in the process of reading BER from the network, empty otherwise. This value is usually empty (as in the example). E this is the bind DN. This may be empty or have value of NULLDN for anonymous connections. F is the connection maximum threads state: 1 is in max threads, 0 is not. G is the number of times this thread has hit the maximum threads value. H is the number of operations attempted that were blocked by the maximum number of threads. I is the connection ID as reported in the logs as conn= connection_ID . IP_address is the IP address of the LDAP client. Note B and C for the initiated and completed operations should ideally be equal. currentConnections This attribute shows the number of currently open and active Directory Server connections. totalConnections This attribute shows the total number of Directory Server connections. This number includes connections that have been opened and closed since the server was last started in addition to the currentConnections . dTableSize This attribute shows the size of the Directory Server connection table. Each connection is associated with a slot in this table, and usually corresponds to the file descriptor used by this connection. See Section 3.1.1.62, "nsslapd-conntablesize" for more information. readWaiters This attribute shows the number of connections where some requests are pending and not currently being serviced by a thread in Directory Server. opsInitiated This attribute shows the number of Directory Server operations initiated. opsCompleted This attribute shows the number of Directory Server operations completed. entriesSent This attribute shows the number of entries sent by Directory Server. bytesSent This attribute shows the number of bytes sent by Directory Server. currentTime This attribute shows the current time, given in Greenwich Mean Time (indicated by generalizedTime syntax Z notation; for example, 20200202131102Z ). startTime This attribute shows the Directory Server start time given in Greenwich Mean Time, indicated by generalizedTime syntax Z notation. For example, 20200202131102Z . version This attribute shows the Directory Server vendor, version, and build number. For example, Red Hat/11.3.1 B2020.274.08 . threads This attribute shows the number of threads used by the Directory Server. This should correspond to nsslapd-threadnumber in cn=config . nbackEnds This attribute shows the number of Directory Server database back ends. backendMonitorDN This attribute shows the DN for each Directory Server database backend. For further information on monitoring the database, see the following sections: Section 4.4.9, "Database Attributes under cn=attributeName,cn=encrypted attributes,cn=database_name,cn=ldbm database,cn=plugins,cn=config" Section 4.4.5, "Database Attributes under cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config" Section 4.5.4, "Database Link Attributes under cn=monitor,cn=database instance name,cn=chaining database,cn=plugins,cn=config" 3.1.12. cn=replication This entry has no attributes. When configuring legacy replication, those entries are stored under this cn=replication node, which serves as a placeholder. 3.1.13. cn=sasl Entries which contain SASL mapping configurations are stored under cn=mapping,cn=sasl,cn=config . The cn=sasl entry is an instance of the nsContainer object class. Each mapping underneath it is an instance of the nsSaslMapping object class. 3.1.13.1. nsSaslMapBaseDNTemplate This attribute contains the search base DN template used in SASL identity mapping. Parameter Description Entry DN cn= mapping_name ,cn=mapping,cn=sasl,cn=config Valid Values Any valid DN Default Value Syntax IA5String Example nsSaslMapBaseDNTemplate: ou=People,dc=example,dc=com 3.1.13.2. nsSaslMapFilterTemplate This attribute contains the search filter template used in SASL identity mapping. Parameter Description Entry DN cn= mapping_name ,cn=mapping,cn=sasl,cn=config Valid Values Any string Default Value Syntax IA5String Example nsSaslMapFilterTemplate: (cn=\1) 3.1.13.3. nsSaslMapPriority Directory Server enables you to set multiple simple authentication and security layer (SASL) mappings. If SASL fallback is enabled by the nsslapd-sasl-mapping-fallback parameter, you can set the nsSaslMapPriority attribute to prioritize the individual SASL mappings. This setting does not require a server restart to take effect. Parameter Description Entry DN cn= mapping_name ,cn=mapping,cn=sasl,cn=config Valid Values 1 (highest priority) - 100 (lowest priority) Default Value 100 Syntax Integer Example nsSaslMapPriority: 100 3.1.13.4. nsSaslMapRegexString This attribute contains a regular expression used to map SASL identity strings. Parameter Description Entry DN cn= mapping_name ,cn=mapping,cn=sasl,cn=config Valid Values Any valid regular expression Default Value Syntax IA5String Example nsSaslMapRegexString: \(.*\) 3.1.14. cn=SNMP SNMP configuration attributes are stored under cn=SNMP,cn=config . The cn=SNMP entry is an instance of the nsSNMP object class. 3.1.14.1. nssnmpenabled This attribute sets whether SNMP is enabled. Parameter Description Entry DN cn=SNMP,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nssnmpenabled: off 3.1.14.2. nssnmporganization This attribute sets the organization to which the Directory Server belongs. Parameter Description Entry DN cn=SNMP,cn=config Valid Values Organization name Default Value Syntax DirectoryString Example nssnmporganization: Red Hat, Inc. 3.1.14.3. nssnmplocation This attribute sets the location within the company or organization where the Directory Server resides. Parameter Description Entry DN cn=SNMP,cn=config Valid Values Location Default Value Syntax DirectoryString Example nssnmplocation: B14 3.1.14.4. nssnmpcontact This attribute sets the email address of the person responsible for maintaining the Directory Server. Parameter Description Entry DN cn=SNMP,cn=config Valid Values Contact email address Default Value Syntax DirectoryString Example nssnmpcontact: [email protected] 3.1.14.5. nssnmpdescription Provides a unique description of the Directory Server instance. Parameter Description Entry DN cn=SNMP,cn=config Valid Values Description Default Value Syntax DirectoryString Example nssnmpdescription: Employee directory instance 3.1.14.6. nssnmpmasterhost nssnmpmasterhost is deprecated. This attribute is deprecated with the introduction of net-snmp . The attribute still appears in dse.ldif but without a default value. Parameter Description Entry DN cn=SNMP,cn=config Valid Values machine host name or localhost Default Value <blank> Syntax DirectoryString Example nssnmpmasterhost: localhost 3.1.14.7. nssnmpmasterport The nssnmpmasterport attribute was deprecated with the introduction of net-snmp . The attribute still appears in dse.ldif but without a default value. Parameter Description Entry DN cn=SNMP,cn=config Valid Values Operating system dependent port number. See the operating system documentation for further information. Default Value <blank> Syntax Integer Example nssnmpmasterport: 199 3.1.15. SNMP Statistic Attributes Table 3.7, "SNMP Statistic Attributes" contains read-only attributes which list the statistics available for LDAP and SNMP clients. Unless otherwise noted, the value for the given attribute is the number of requests received by the server or results returned by the server since startup. Some of these attributes are not used by or are not applicable to the Directory Server but are still required to be present by SNMP clients. If the nsslapd-counters attribute in cn=config is set to on (the default setting), then all of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. All of the SNMP statistics attributes use the 64-bit integers, if it is configured. Note The nsslapd-counters attribute enables 64-bit integers for these specific database and server counters. The counters which use 64-bit integers are not configurable; 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. Table 3.7. SNMP Statistic Attributes Attribute Description AnonymousBinds This shows the number of anonymous bind requests. UnAuthBinds This shows the number of unauthenticated (anonymous) binds. SimpleAuthBinds This shows the number of LDAP simple bind requests (DN and password). StrongAuthBinds This shows the number of LDAP SASL bind requests, for all SASL mechanisms. BindSecurityErrors This shows the number of number of times an invalid password was given in a bind request. InOps This shows the total number of all requests received by the server. ReadOps Not used. This value is always 0 . CompareOps This shows the number of LDAP compare requests. AddEntryOps This shows the number of LDAP add requests. RemoveEntryOps This shows the number of LDAP delete requests. ModifyEntryOps This shows the number of LDAP modify requests. ModifyRDNOps This shows the number of LDAP modify RDN (modrdn) requests. ListOps Not used. This value is always 0 . SearchOps This shows the number of LDAP search requests. OneLevelSearchOps This shows the number of one-level search operations. WholeSubtreeSearchOps This shows the number of subtree-level search operations. Referrals This shows the number of LDAP referrals returned. Chainings Not used. This value is always 0 . SecurityErrors This shows the number of errors returned that were security related, such as invalid passwords, unknown or invalid authentication methods, or stronger authentication required. Errors This shows the number of errors returned. Connections This shows the number of currently open connections. ConnectionSeq This shows the total number of connections opened, including both currently open and closed connections. BytesRecv This shows the number of bytes received. BytesSent This shows the number of bytes sent. EntriesReturned This shows the number of entries returned as search results. ReferralsReturned This provides information on referrals returned as search results (continuation references). MasterEntries Not used. This value is always 0 . CopyEntries Not used. This value is always 0 . CacheEntries [a] If the server has only one database back end, this is the number of entries cached in the entry cache. If the server has more than one database back end, this value is 0 , and see the monitor entry for each one for more information. CacheHits If the server has only one database back end, this is the number of entries returned from the entry cache, rather than from the database, for search results. If the server has more than one database back end, this value is 0 , and see the monitor entry for each one for more information. SlaveHits Not used. This value is always 0 . [a] CacheEntries and CacheHits are updated every ten (10) seconds. Red Hat strongly encourages using the database back end specific monitor entries for this and other database information. 3.1.16. cn=tasks Some core Directory Server tasks can be initiated by editing a directory entry using LDAP tools. These task entries are contained in cn=tasks . Each task can be invoked by updating an entry such as the following: In Red Hat Directory Server deployments before Directory Server 8.0, many Directory Server tasks were managed by the Administration Server. These tasks were moved to the core Directory Server configuration in version 8.0 and are invoked and administered by Directory Server under the cn=tasks entry. There following tasks are managed under the cn=tasks entry: Section 3.1.16.2, "cn=import" Section 3.1.16.3, "cn=export" Section 3.1.16.4, "cn=backup" Section 3.1.16.5, "cn=restore" Section 3.1.16.6, "cn=index" Section 3.1.16.7, "cn=schema reload task" Section 3.1.16.8, "cn=memberof task" Section 3.1.16.9, "cn=fixup linked attributes" Section 3.1.16.10, "cn=syntax validate" Section 3.1.16.11, "cn=USN tombstone cleanup task" Section 3.1.16.12, "cn=cleanallruv" Section 3.1.16.13, "cn=abort cleanallruv" Section 3.1.16.14, "cn=automember rebuild membership" Section 3.1.16.15, "cn=automember export updates" Section 3.1.16.16, "cn=automember map updates" The common attributes for these tasks are listed in Section 3.1.16.1, "Task Invocation Attributes for Entries under cn=tasks" . The cn=tasks entry itself has no attributes and serves as the parent and container entry for the individual task entries. Important Task entries are not permanent configuration entries. They only exist in the configuration file for as long as the task operation is running or until the ttl period expires. Then, the entry is deleted automatically by the server. 3.1.16.1. Task Invocation Attributes for Entries under cn=tasks Five tasks which administer Directory Server instances have configuration entries which initiate and identify individual operations. These task entries are instances of the same object class, extensibleObject , and have certain common attributes which describe the state and behavior of Directory Server tasks. The task types can be import, export, backup, restore, index, schema reload, and memberof. cn The cn attribute identifies a new task operation to initiate. The cn attribute value can be anything, as long as it defines a new task. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values Any string Default Value Syntax DirectoryString Example cn: example task entry name nsTaskStatus This attribute contains changing information about the status of the task, such as cumulative statistics or its current output message. The entire contents of the attribute may be updated periodically for as long as the process is running. This attribute value is set by the server and should not be edited. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values Any string Default Value Syntax case-exact string Example nsTaskStatus: Loading entries... . nsTaskLog This entry contains all of the log messages for the task, including bothwarning and information messages. New messages are appended to the end of the entry value, so this attribute value grows larger, without erasing the original contents, by default. Successful task operations, which have an nsTaskExitCode of 0 , are only recorded in the nsTaskLog attribute. Any non-zero response, which indicates an error, may be recorded in the error log as an error, but the error message is only recorded in the nsTaskLog attribute. For this reason, use the information in the nsTaskLog attribute to find out what errors actuall occurred. This attribute value is set by the server and should not be edited. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values Any string Default Value Syntax Case-exact string Example nsTaskLog: example... nsTaskExitCode This attribute contains the exit code for the task. This attribute only exists after the task is completed and any value is only valid if the task is complete. The result code can be any LDAP exit code, as listed in Section 7.4, "LDAP Result Codes" , but only a 0 value equals success; any other result code is an error. This attribute value is set by the server and should not be edited. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values 0 (success) to 97 [a] Default Value Syntax Integer Example nsTaskExitCode: 0 [a] Any response other than 0 is an error. nsTaskCurrentItem This attribute shows the number of subtask which the task operation has completed, assuming the task can be broken down into subtasks. If there is only one task, then nsTaskCurrentItem is 0 while the task is running, and 1 when the task is complete. In this way, the attribute is analogous to a progress bar. When the nsTaskCurrentItem attribute has the same value as nsTaskTotalItems , then the task is completed. This attribute value is set by the server and should not be edited. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values 0 to the maximum 32 bit integer value (2147483647) Default Value Syntax Integer Example nsTaskCurrentItem: 148 nsTaskTotalItems This attribute shows the total number of subtasks that must be completed for the task operation. When the nsTaskCurrentItem attribute has the same value as nsTaskTotalItems , then the task is completed. This attribute value is set by the server and should not be edited. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values 0 to the maximum 32 bit integer value (2147483647) Default Value Syntax Integer Example nsTaskTotalItems: 152 nsTaskCancel This attribute allows a task to be aborted while in progress. This attribute can be modified by users. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values true | false Default Value Syntax Case-insensitive string Example nsTaskCancel: true ttl This attribute sets the amount of time (in seconds) the task entry will remain in the DSE after the task has finished or aborted. Setting a ttl attribute allows the task entry to be polled for new status information without missing the exit code. Setting the ttl attribute to 0 means that the entry is not cached. Parameter Description Entry DN cn= task_name ,cn= task_type ,cn=tasks,cn=config Valid Values 0 (cannot be cached) to the maximum 32 bit integer value (2147483647) Default Value Syntax DirectoryString Example ttl: 120 3.1.16.2. cn=import An LDIF file or multiple LDIF files can be imported through the command line by creating a special task entry which defines the parameters of the task and initiates the task. As soon as the task is complete, the task entry is removed from the directory. The cn=import entry is a container entry for import task operations. The cn=import entry itself has no attributes, but each of the task entries within this entry, such as cn= task_ID , cn=import , cn=tasks , cn=config , uses the following attributes to define the import task. An import task entry under cn=import must contain the LDIF file to import (in the nsFilename attribute) and the name of the instance into which to import the file (in the nsInstance attribute). Additionally, it must contain a unique cn to identify the task. For example: As the import operation runs, the task entry will contain all of the server-generated task attributes listed in Section 3.1.16.1, "Task Invocation Attributes for Entries under cn=tasks" . There are some optional attributes which can be used to refine the import operation, similar to the options for the ldif2db and ldif2db.pl scripts: nsIncludeSuffix , which is analogous to the -s option to specify the suffix to import nsExcludeSuffix , analogous to the -x option to specify a suffix or subtree to exclude from the import nsImportChunkSize , analogous to the -c option to override starting a new pass during the import and merge the chunks nsImportIndexAttrs , which sets whether to import attribute indexes (with no corollary in the script options) nsUniqueIdGenerator , analogous to the -g option to generate unique ID numbers for the entries nsUniqueIdGeneratorNamespace , analogous to the -G option to generate a unique, name-based ID for the entries nsFilename The nsFilename attribute contains the path and filenames of the LDIF files to import into the Directory Server instance. To import multiple files, add multiple instances of this attribute. For example: Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values Any string Default Value Syntax Case-exact string, multi-valued Example nsFilename: /home/jsmith/example.ldif nsInstance This attribute supplies the name of the database instance into which to import the files, such as userRoot or slapd-example . Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values The name of a Directory Server instance database (any string) Default Value Syntax Case-exact string Example nsInstance: userRoot nsIncludeSuffix This attribute identifies a specific suffix or subtree to import from the LDIF file. Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values Any DN Default Value Syntax DN, multi-valued Example nsIncludeSuffix: ou=people,dc=example,dc=com nsExcludeSuffix This attribute identifies suffixes or subtrees in the LDIF file to exclude from the import. Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values Any DN Default Value Syntax DN, multi-valued Example nsExcludeSuffix: ou=machines,dc=example,dc=com nsImportChunkSize This attribute defines the number of chunks to have during the import operation, which overrides the server's detection during the import of when to start a new pass and merges the chunks. Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values 0 to the maximum 32 bit integer value (2147483647) Default Value 0 Syntax Integer Example nsImportChunkSize: 10 nsImportIndexAttrs This attribute sets whether to index the attributes that are imported into database instance. Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values true | false Default Value true Syntax Case-insensitive string Example nsImportIndexAttrs: true nsUniqueIdGenerator This sets whether to generate a unique ID for the imported entries. By default, this attribute generates time-based IDs. Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values none (no unique ID) | empty (time-based ID) | deterministic namespace (name-based ID) Default Value empty Syntax Case-insensitive string Example nsUniqueIdGenerator: nsUniqueIdGeneratorNamespace This attribute defines how to generate name-based IDs; the attribute sets the namespace to use to generate the IDs. This option is useful to import the same LDIF file into two Directory Server instances when the entries need to have the same IDs. Parameter Description Entry DN cn= task_name ,cn=import,cn=tasks,cn=config Valid Values Any string Default Value Syntax Case-insensitive string Example nsUniqueIdGeneratorNamespace: example 3.1.16.3. cn=export A database or multiple databases can be exported through the command line by creating a special task entry which defines the parameters of the task and initiates the task. As soon as the task is complete, the task entry is removed from the directory. The cn=export,cn=tasks,cn=config entry is a container for export task operations. These tasks are stored within this container and named cn= task_name ,cn=export,cn=tasks,cn=config . While the export operation is running, the task entry contains all of the server-generated task attributes listed in Section 3.1.16.1, "Task Invocation Attributes for Entries under cn=tasks" . You can create export tasks manually or use the db2ldif.pl command. The following table displays the db2ldif.pl command-line options and their corresponding attributes: db2ldif.pl option Task attribute Description -a nsFilename Sets the path to the exported LDIF file. -C nsUseId2Entry If enabled, use only the main database file only. -M nsUseOneFile If enabled, store output in multiple files. -n nsInstance Sets the database name. -N nsPrintKey Enables you to suppress printing the sequence number. -r nsExportReplica If set, the export will include attributes to initialize a replica. -s nsIncludeSuffix Sets the suffix to include in the exported file. -u nsDumpUniqId Enables you not to export the unique ID. -U nsNoWrap If set, long lines are not wrapped. -x nsExcludeSuffix Sets the suffix to exclude in the exported file. nsFilename The nsFilename attribute contains the path and filenames of the LDIF files to which to export the Directory Server instance database. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values Any string Default Value Syntax Case-exact string, multi-valued Example nsFilename: /home/jsmith/example.ldif nsInstance This attribute supplies the name of the database instance from which to export the database, such as userRoot or userRoot . Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values The name of a Directory Server instance (any string) Default Value Syntax Case-exact string, multi-valued Example nsInstance: userRoot nsIncludeSuffix This attribute identifies a specific suffix or subtree to export to an LDIF file. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values Any DN Default Value Syntax DN, multi-valued Example nsIncludeSuffix: ou=people,dc=example,dc=com nsExcludeSuffix This attribute identifies suffixes or subtrees in the database to exclude from the exported LDIF file. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values Any DN Default Value Syntax DN, multi-valued Example nsExcludeSuffix: ou=machines,dc=example,dc=com nsUseOneFile This attribute sets whether to export all Directory Server instances to a single LDIF file or separate LDIF files. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values true | false Default Value true Syntax Case-insensitive string Example nsUseOneFile: true nsExportReplica This attribute identifies whether the exported database will be used in replication. For replicas, the proper attributes and settings will be included with the entry to initialize the replica automatically. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values true | false Default Value false Syntax Case-insensitive string Example nsExportReplica: true nsPrintKey This attribute sets whether to print the entry ID number as the entry is processed by the export task. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values true | false Default Value true Syntax Case-insensitive string Example nsPrintKey: false nsUseId2Entry The nsUseId2Entry attribute uses the main database index, id2entry , to define the exported LDIF entries. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values true | false Default Value false Syntax Case-insensitive string Example nsUseId2Entry: true nsNoWrap This attribute sets whether to wrap long lines in the LDIF file. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values true | false Default Value false Syntax Case-insensitive string Example nsNoWrap: false nsDumpUniqId This attribute sets that the unique IDs for the exported entries are not exported. Parameter Description Entry DN cn= task_name ,cn=export,cn=tasks,cn=config Valid Values true | false Default Value true Syntax Case-insensitive string Example nsDumpUniqId: true 3.1.16.4. cn=backup A database can be backed up through the command line by creating a special task entry which defines the parameters of the task and initiates the task. As soon as the task is complete, the task entry is removed from the directory. The cn=backup entry is a container entry for backup task operations. The cn=backup entry itself has no attributes, but each of the task entries within this entry, such as cn= task_ID , cn=backup , cn=tasks , cn=config , uses the following attributes to define the backup task. A backup task entry under cn=backup must contain the location of the directory to which to copy the archive copy (in the nsArchiveDir attribute) and the type of database being backed up (in the nsDatabaseType attribute). Additionally, it must contain a unique cn to identify the task. For example: As the backup operation runs, the task entry will contain all of the server-generated task attributes listed in Section 3.1.16.1, "Task Invocation Attributes for Entries under cn=tasks" . nsArchiveDir This attribute gives the location of the directory to which to write the backup. The backup directory here should usually be the same as the one configured in the nsslapd-bakdir attribute. If this attribute is not included with the cn=backup task, the task will fail with an LDAP object class violation error ( 65 ). Parameter Description Entry DN cn= task_name ,cn=backup,cn=tasks,cn=config Valid Values Any local directory location Default Value Syntax Case-exact string Example nsArchiveDir: /export/backups nsDatabaseType This attribute gives the kind of database being archived. Setting the database types signals what kind of backup plug-in the Directory Server should use to archive the database. Parameter Description Entry DN cn= task_name ,cn=backup,cn=tasks,cn=config Valid Values ldbm database Default Value ldbm database Syntax Case-exact string Example nsDatabaseType: ldbm database 3.1.16.5. cn=restore A database can be restored through the command line by creating a special task entry which defines the parameters of the task and initiates the task. As soon as the task is complete, the task entry is removed from the directory. The cn=restore entry is a container entry for task operations to restore a database. The cn=restore entry itself has no attributes, but each of the task entries within this entry, such as cn= task_ID , cn=restore , cn=tasks , cn=config , uses the following attributes to define the restore task. A restore task entry under cn=restore must contain the location of the directory from which to retrieve the archive copy (in the nsArchiveDir attribute) and the type of database being restored (in the nsDatabaseType attribute). Additionally, it must contain a unique cn to identify the task. For example: As the restore operation runs, the task entry will contain all of the server-generated task attributes listed in Section 3.1.16.1, "Task Invocation Attributes for Entries under cn=tasks" . nsArchiveDir This attribute gives the location of the directory to which to write the backup. Parameter Description Entry DN cn= task_name ,cn=restore,cn=tasks,cn=config Valid Values Any local directory location Default Value Syntax Case-exact string Example nsArchiveDir: /export/backups nsDatabaseType This attribute gives the kind of database being archived. Setting the database types signals what kind of backup plug-in the Directory Server should use to archive the database. Parameter Description Entry DN cn= task_name ,cn=restore,cn=tasks,cn=config Valid Values ldbm database Default Value ldbm database Syntax Case-exact string Example nsDatabaseType: ldbm database 3.1.16.6. cn=index Directory attributes can be indexed though the command line by creating a special task entry which defines the parameters of the task and initiates the task. As soon as the task is complete, the task entry is removed from the directory. The cn=index entry is a container entry for index task operations. The cn=index entry itself has no attributes, but each of the task entries within this entry, such as cn= task_ID , cn=index , cn=tasks , cn=config , uses the following attributes to define the backup task. An index task entry under cn=index can create a standard index by identifying the attribute to be indexed and the type of index to create, both defined in the nsIndexAttribute attribute. Alternatively, the index task can be used to generate virtual list view (VLV) indexes for an attribute using the nsIndexVLVAttribute attribute. This is the same as running the vlvindex script. For example: As the index operation runs, the task entry will contain all of the server-generated task attributes listed in Section 3.1.16.1, "Task Invocation Attributes for Entries under cn=tasks" . nsIndexAttribute This attribute gives the name of the attribute to index and the types of indexes to apply. The format of the attribute value is the attribute name and a comma-separated list of index types, enclosed in double quotation marks. For example: Parameter Description Entry DN cn= task_name ,cn=index,cn=tasks,cn=config Valid Values * Any attribute * The index type, which can be pres (presence), eq (equality), approx (approximate), and sub (substring) Default Value Syntax Case-insensitive string, multi-valued Example * nsIndexAttribute: cn:pres,eq * nsIndexAttribute: description:sub nsIndexVLVAttribute This attribute gives the name of the target entry for a VLV index. A virtual list view is based on a browsing index entry (as described in the Administration Guide ), which defines the virtual list base DN, scope, and filter. The nsIndexVLVAttribute value is the browsing index entry, and the VLV creation task is run according to the browsing index entry parameters. Parameter Description Entry DN cn= task_name ,cn=index,cn=tasks,cn=config Valid Values RDN of the subentry of the VLV entry definition Default Value Syntax DirectoryString Example nsIndexVLVAttribute: " browsing index sort identifier " 3.1.16.7. cn=schema reload task The directory schema is loaded when the directory instance is started or restarted. Any changes to the directory schema, including adding custom schema elements, are not loaded automatically and available to the instance until the server is restarted or by initiating a schema reload task. Custom schema changes can be reloaded dynamically, without having to restart the Directory Server instance. This is done by initiating a schema reload task through creating a new task entry under the cn=tasks entry. The custom schema file can be located in any directory; if not specified with the schemadir attribute, the server reloads the schema from the default /etc/dirsrv/slapd- instance /schema directory. Important Any schema loaded from another directory must be copied into the schema directory or the schema will be lost when the server. The schemd reload task is initiated though the command line by creating a special task entry which defines the parameters of the task and initiates the task. As soon as the task is complete, the task entry is removed from the directory. For example: The cn=schema reload task entry is a container entry for schema reload operations. The cn=schema reload task entry itself has no attributes, but each of the task entries within this entry, such as cn= task_ID , cn=schema reload task , cn=tasks , cn=config , uses the schema reload attributes to define the individual reload task. cn The cn attribute identifies a new task operation to initiate. The cn attribute value can be anything, as long as it defines a new task. Parameter Description Entry DN cn= task_name ,cn=schema reload task,cn=tasks,cn=config Valid Values Any string Default Value Syntax DirectoryString Example cn: example reload task ID schemadir This contains the full path to the directory containing the custom schema file. Parameter Description Entry DN cn= task_name ,cn=schema reload task,cn=tasks,cn=config Valid Values Any local directory path Default Value /etc/dirsrv/schema Syntax DirectoryString Example schemadir: /export/schema/ 3.1.16.8. cn=memberof task The memberOf attribute is created and managed by the Directory Server automatically to display group membership on the members' user entries. When the member attribute on a group entry is changed, all of the members' associated directory entries are automatically updated with their corresponding memberOf attributes. The cn=memberof task (and the related fixup-memberof.pl script) is used to create the initial memberOf attributes on the member's user entries in the directory. After the memberOf attributes are created, then the MemberOf Plug-in manages the memberOf attributes automatically. The memberOf update task must give the DN of the entry or subtree to run the update task against (set in the basedn attribute). Optionally, the task can include a filter to identify the members' user entries to update (set in the filter attribute). For example: When the task is complete, the task entry is removed from the directory. The cn=memberof task entry is a container entry for memberOf update operations. The cn=memberof task entry itself has no attributes, but each of the task entries beneath this entry, such as cn= task_ID , cn=memberof task , cn=tasks , cn=config , uses its attributes to define the individual update task. basedn This attribute gives the base DN to use to search for the user entries to update the memberOf attribute. Parameter Description Entry DN cn= task_name ,cn=memberof task,cn=tasks,cn=config Valid Values Any DN Default Value Syntax DN Example basedn: ou=people,dc=example,dc=com filter This attribute gives an optional LDAP filter to use to select which user entries to update the memberOf attribute. Each member of a group has a corresponding user entry in the directory. Parameter Description Entry DN cn= task_name ,cn=memberof task,cn=tasks,cn=config Valid Values Any LDAP filter Default Value (objectclass=*) Syntax DirectoryString Example filter: (l=Sunnyvale) 3.1.16.9. cn=fixup linked attributes The Directory Server has a Linked Attributes Plug-in which allows one attribute, set in one entry, to update another attribute in another entry automatically. Both entries have DNs for values. The DN value in the first entry points to the entry for the plug-in to update; the attribute in the second entry contains a DN back-pointer to the first entry. This is similar to the way that the MemberOf Plug-in uses the member attribute in group entries to set memberOf attribute in user entries. With linked attributes, any attribute can be defined as a "link," and then another attribute is "managed" in affected entries. The cn=fixup linked attributes (and the related fixup-linkedattrs.pl script) creates the managed attributes - based on link attributes that already exist in the database - in the user entries once the linking plug-in instance is created. After the linked and managed attributes are set, the Linked Attributes Plug-in maintains the managed attributes dynamically, as users change the link attributes. The linked attributes update task can specify which linked attribute plug-in instance to update, set in the optional linkdn attribute. If this attribute is not set on the task entry, then all configured linked attributes are updated. When the task is complete, the task entry is removed from the directory. The cn=fixup linked attributes entry is a container entry for any linked attribute update operation. The cn=fixup linked attributes entry itself has no attributes related to individual tasks, but each of the task entries beneath this entry, such as cn= task_ID , cn=fixup linked attributes , cn=tasks , cn=config , uses its attributes to define the individual update task. linkdn Each linked-managed attribute pair is configured in a linked attributes plug-in instance. The linkdn attribute sets the specific linked attribute plug-in used to update the entries by giving the plug-in instance DN. For example: If no plug-in instance is given, then all linked attributes are updated. Parameter Description Entry DN cn= task_name ,cn=fixup linked attributes,cn=tasks,cn=config Valid Values A DN (for an instance of the Linked Attributes plug-in) Default Value None Syntax DN Example linkdn: cn=Manager Links,cn=Linked Attributes,cn=plugins,cn=config 3.1.16.10. cn=syntax validate Syntax validation checks every modification to attributes to make sure that the new value has the required syntax for that attribute type. Attribute syntaxes are validated against the definitions in RFC 4514 . Syntax validation is enabled by default. However, syntax validation only audits changes to attribute values, such as when an attribute is added or modified. It does not validate the syntax of existing attribute values. Validation of the existing syntax can be done with the syntax validation task. This task checks entries under a specified subtree (in the basedn attribute) and, optionally, only entries which match a specified filter (in the filter attribute). When the task is complete, the task entry is removed from the directory. If syntax validation is disabled or if a server is migrated, then there may be data in the server which does not conform to attribute syntax requirements. The syntax validation task can be run to evaluate those existing attribute values before enabling syntax validation. The cn=syntax validate entry is a container entry for any syntax validation operation. The cn=syntax validate entry itself has no attributes that are specific to any task. Each of the task entries beneath this entry, such as cn= task_ID , cn=syntax validate , cn=tasks , cn=config , uses its attributes to define the individual update task. basedn Gives the subtree against which to run the syntax validation task. For example: Parameter Description Entry DN cn= task_name ,cn=syntax validate,cn=tasks,cn=config Valid Values Any DN Default Value None Syntax DN Example basedn: dc=example,dc=com filter Contains an optional LDAP filter which can be used to identify specific entries beneath the given basedn against which to run the syntax validation task. If this attribute is not set on the task, then every entry within the basedn is audited. For example: Parameter Description Entry DN cn= task_name ,cn=syntax validate,cn=tasks,cn=config Valid Values Any LDAP filter Default Value "(objectclass=*)" Syntax DirectoryString Example filter: "(objectclass=*)" 3.1.16.11. cn=USN tombstone cleanup task If the USN Plug-in is enabled, then update sequence numbers (USNs) are set on every entry whenever a directory write operation, like add or modify, occurs on that entry. This is reflected in the entryUSN operational attribute. This USN is set even when an entry is deleted, and the tombstone entries are maintained by the Directory Server instance. The cn=USN tombstone cleanup task (and the related usn-tombstone-cleanup.pl script) deletes the tombstone entries maintained by the instance according to the back end database (in the backend attribute) or the suffix (in the suffix attribute). Optionally, only a subset of tombstone entries can be deleted by specifying a maximum USN to delete (in the max_usn_to_delete attribute), which preserves the most recent tombstone entries. Important This task can only be launched if replication is not enabled. Replication maintains its own tombstone store, and these tombstone entries cannot be deleted by the USN Plug-in; they must be maintained by the replication processes. Thus, Directory Server prevents users from running the cleanup task for replicated databases. Attempting to create this task entry for a replicated back end will return this error in the command line: In the error log, there is a more explicit message that the suffix cannot have tombstone removed because it is replicated. When the task is complete, the task entry is removed from the directory. The cn=USN tombstone cleanup task entry is a container entry for all USN tombstone delete operations. The cn=USN tombstone cleanup task entry itself has no attributes related to any individual task, but each of the task entries beneath this entry, such as cn= task_ID , cn=USN tombstone cleanup task , cn=tasks , cn=config , uses its attributes to define the individual update task. backend This gives the Directory Server instance back end, or database, to run the cleanup operation against. If the back end is not specified, then the suffix must be specified. Parameter Description Entry DN cn= task_name ,cn=USN tombstone cleanup task,cn=tasks,cn=config Valid Values Database name Default Value None Syntax DirectoryString Example backend: userroot max_usn_to_delete This gives the highest USN value to delete when removing tombstone entries. All tombstone entries up to and including that number are deleted. Tombstone entries with higher USN values (that means more recent entries) are not deleted. Parameter Description Entry DN cn= task_name ,cn=USN tombstone cleanup task,cn=tasks,cn=config Valid Values Any integer Default Value None Syntax Integer Example max_usn_to_delete: 500 suffix This gives the suffix or subtree in the Directory Server to run the cleanup operation against. If the suffix is not specified, then the back end must be given. Parameter Description Entry DN cn= task_name ,cn=USN tombstone cleanup task,cn=tasks,cn=config Valid Values Any subtree DN Default Value None Syntax DN Example suffix: dc=example,dc=com 3.1.16.12. cn=cleanallruv Information about the replication topology - all of the suppliers which are supplying updates to each other and other replicas within the same replication group - is contained in a set of metadata called the replica update vector (RUV) . The RUV contains information about the supplier like its ID and URL, its latest change state number for changes made on the local server, and the CSN of the first change. Both suppliers and consumers store RUV information, and they use it to control replication updates. When one supplier is removed from the replication topology, it may remain in another replica's RUV. When the other replica is restarted, it can record errors in its log that the replication plug-in does not recognize the (removed) supplier. When the supplier is permanently removed from the topology, then any lingering metadata about that supplier should be purged from every other supplier's RUV entry. The cn=cleanallruv task propagates through all servers in the replication topology and removes the RUV entries associated with the specified missing or obsolete supplier. When the task is complete, the task entry is removed from the directory. The cn=cleanallruv entry is a container entry for all clean RUV operations. The cn=cleanallruv entry itself has no attributes related to any individual task, but each of the task entries beneath this entry, such as cn= task_ID , cn=cleanallruv , cn=tasks , cn=config , uses its attributes to define the individual update task. Each clean RUV task must specify the replica ID number of the replica RUV entries to remove, the based DN of the replicated database, and whether remaining updates from the missing supplier should be applied before removing the RUV data. replica-base-dn This gives the Directory Server base DN associated with the replicated database. This is the base DN for the replicated suffix. Parameter Description Entry DN cn= task_name ,cn=cleanallruv,cn=tasks,cn=config Valid Values Directory suffix DN Default Value None Syntax DirectoryString Example replica-base-dn: dc=example,dc=com replica-id This gives the replica ID (defined in the nsDS5ReplicaId attribute for the replica configuration entry) of the replica to be removed from the replication topology. Parameter Description Entry DN cn= task_name ,cn=cleanallruv,cn=tasks,cn=config Valid Values 0 to 65534 Default Value None Syntax Integer Example replica-id: 55 replica-force-cleaning This sets whether any outstanding updates from the replica to be removed should be applied ( no ) or whether the clean RUV operation should force-continue and lose any remaining updates ( yes ). Parameter Description Entry DN cn= task_name ,cn=cleanallruv,cn=tasks,cn=config Valid Values no | yes Default Value None Syntax DirectoryString Example replica-force-cleaning: no 3.1.16.13. cn=abort cleanallruv The Section 3.1.16.12, "cn=cleanallruv" task can take several minutes to propagate among all servers in the replication topology, even longer if the task processes all updates first. For performance or other maintenance considerations, it is possible to terminate a clean RUV task, and that termination is also propagated across all servers in the replication topology. The termination task is an isntance of the cn=abort cleanallruv entry. When the task is complete, the task entry is removed from the directory. The cn=abort cleanallruv entry is a container entry for all clean RUV operations. The cn=abort cleanallruv entry itself has no attributes related to any individual task, but each of the task entries beneath this entry, such as cn= task_ID , cn=abort cleanallruv , cn=tasks , cn=config , uses its attributes to define the individual update task. Each clean RUV task must specify the replica ID number of the replica RUV entries to which are currently being removed , the based DN of the replicated database, and whether the terminate task should complete when it has completed on all servers in the topology or just locally. replica-base-dn This gives the Directory Server base DN associated with the replicated database. This is the base DN for the replicated suffix. Parameter Description Entry DN cn= task_name ,cn=abort cleanallruv,cn=tasks,cn=config Valid Values Directory suffix DN Default Value None Syntax DirectoryString Example replica-base-dn: dc=example,dc=com replica-id This gives the replica ID (defined in the nsDS5ReplicaId attribute for the replica configuration entry) of the replica in the process of being removed from the replication topology. Parameter Description Entry DN cn= task_name ,cn=abort cleanallruv,cn=tasks,cn=config Valid Values 0 to 65534 Default Value None Syntax Integer Example replica-id: 55 replica-certify-all This sets whether the task should complete successfully on all servers in the replication topology before completing the task locally ( yes ) or whether the task should show complete as soon as it completes locally ( no ). Parameter Description Entry DN cn= task_name ,cn=abort cleanallruv,cn=tasks,cn=config Valid Values no | yes Default Value None Syntax DirectoryString Example replica-certify-all: yes 3.1.16.14. cn=automember rebuild membership The Auto Member Plug-in only runs when new entries are added to the directory. The plug-in ignores existing entries or entries which are edited to match an automembership rule. The cn=automember rebuild membership task runs the current automembership rules against existing entries to update or rebuild group membership. All configured automembership rules are run against the identified entries (though not all rules may apply to a given entry). basedn This gives the Directory Server base DN to use to search for user entries. The entries in the specified DN are then updated according to the automembership rules. Parameter Description Entry DN cn= task_name ,cn=automember rebuild membership,cn=tasks,cn=config Valid Values Directory suffix DN Default Value None Syntax DirectoryString Example basedn: dc=example,dc=com filter This attribute gives an LDAP filter to use to identify which user entries to update according to the configured automembership rules. Parameter Description Entry DN cn= task_name ,cn=automember rebuild membership,cn=tasks,cn=config Valid Values Any LDAP filter Default Value None Syntax DirectoryString Example filter: (uid=*) scope This attribute gives an LDAP search scope to use when searching the given base DN. Parameter Description Entry DN cn= task_name ,cn=automember rebuild membership,cn=tasks,cn=config Valid Values sub | base | one Default Value None Syntax DirectoryString Example scope: sub 3.1.16.15. cn=automember export updates This task runs against existing entries in the directory and exports the results of what users would have been added to what groups, based on the rules. This is useful for testing existing rules against existing users to see how your real deployment are performing. The automembership-related changes are not executed. The proposed changes are written to a specified LDIF file. basedn This gives the Directory Server base DN to use to search for user entries. A test-run of the automembership rules will be run against the identified entries. Parameter Description Entry DN cn= task_name ,cn=automember export updates,cn=tasks,cn=config Valid Values Directory suffix DN Default Value None Syntax DirectoryString Example basedn: dc=example,dc=com filter This attribute gives an LDAP filter to use to identify which user entries to test-run the automembership rules. Parameter Description Entry DN cn= task_name ,cn=automember export updates,cn=tasks,cn=config Valid Values Any LDAP filter Default Value None Syntax DirectoryString Example filter: (uid=*) scope This attribute gives an LDAP search scope to use when searching the given base DN. Parameter Description Entry DN cn= task_name ,cn=automember export updates,cn=tasks,cn=config Valid Values sub | base | one Default Value None Syntax DirectoryString Example scope: sub ldif This attribute sets the full path and filename of an LDIF file to which to write the proposed changes from the test-run of the automembership rules. This file must be local to the system from which the task is initiated. Parameter Description Entry DN cn= task_name ,cn=automember export updates,cn=tasks,cn=config Valid Values Local path and filename Default Value None Syntax DirectoryString Example ldif: /tmp/automember-results.ldif 3.1.16.16. cn=automember map updates This task runs against entries within an LDIF file (new entries or, potentially, test entries) and then writes the proposed changes to those user entries to an LDIF file. This can be very useful for testing a new rule, before applying it to (real) new or existing user entries. The automembership-related changes are not executed. The proposed changes are written to a specified LDIF file. ldif_in This attribute sets the full path and filename of an LDIF file from which to import entries to test with the configured automembership rules. These entries are not imported into the directory and the changes are not performed. The entries are loaded and used by the test-run only. This file must be local to the system from which the task is initiated. Parameter Description Entry DN cn= task_name ,cn=automember map updates,cn=tasks,cn=config Valid Values Local path and filename Default Value None Syntax DirectoryString Example ldif_in: /tmp/automember-test-users.ldif ldif_out This attribute sets the full path and filename of an LDIF file to which to write the proposed changes from the test-run of the automembership rules. This file must be local to the system from which the task is initiated. Parameter Description Entry DN cn= task_name ,cn=automember map updates,cn=tasks,cn=config Valid Values Local path and filename Default Value None Syntax DirectoryString Example ldif_out: /tmp/automember-results.ldif 3.1.16.17. cn=des2aes This task searches for all reversible password entries in the specified user database which are encoded using the outdated DES cipher, and converts them to the more secure AES cipher. Previously, this task was being performed automatically on all suffixes during Directory Server startup. However, since the search for DES passwords was typically unindexed, it could take a very long time to perform on suffixes containing large amounts of entries, which in turn caused Directory Server to time out and fail to start. For that reason, the search is now performed only on cn=config , but to convert passwords in any other database you must run this task manually. suffix This multivalued attribute specifies a suffix to check for DES passwords and convert them to AES. If this attribute is omitted then all the back ends/suffixes are checked. Parameter Description Entry DN cn= task_name ,cn=des2aes,cn=tasks,cn=config Valid Values Directory suffix DN Default Value None Syntax DirectoryString Example suffix: dc=example,dc=com 3.1.17. cn=uniqueid generator The unique ID generator configuration attributes are stored under cn=uniqueid generator,cn=config . The cn=uniqueid generator entry is an instance of the extensibleObject object class. nsstate This attribute saves the state of the unique ID generator across server restarts. This attribute is maintained by the server. Do not edit it. Parameter Description Entry DN cn=uniqueid generator,cn=config Valid Values Default Value Syntax DirectoryString Example nsstate: AbId0c3oMIDUntiLCyYNGgAAAAAAAAAA 3.1.18. Root DSE Configuration Parameters 3.1.18.1. nsslapd-return-default-opattr Directory Server does not display the operational attributes in Root DSE searches. For example, if you are running the ldapsearch utility with the -s base -b "" parameters, only the user attributes are displayed. For clients expecting operational attributes in Root DSE search output, you can enable this behavior to provide backward compatibility: Stop the Directory Server instance. Edit the /etc/dirsrv/slapd- instance_name /dse.ldif file and add the following parameters to the dn: section: Start the Directory Server instance. Parameter Description Entry DN Root DSE Valid Values supportedsaslmechanisms | nsBackendSuffix | subschemasubentry | supportedldapversion | supportedcontrol | ref | vendorname | vendorVersion Default Value Syntax DirectoryString Example nsslapd-return-default-opattr: supportedsaslmechanisms 3.2. Configuration Object Classes Many configuration entries simply use the extensibleObject object class, but some require other object classes. These configuration object classes are listed here. 3.2.1. changeLogEntry (Object Class) This object class is used for entries which store changes made to the Directory Server entries. To configure Directory Server to maintain a changelog that is compatible with the changelog implemented in Directory Server 4.1x, enable the Retro Changelog Plug-in. Each entry in the changelog has the changeLogEntry object class. This object class is defined in Changelog Internet Draft. Superior Class top OID 2.16.840.1.113730.3.2.1 Table 3.8. Required Attributes objectClass Defines the object classes for the entry. Section 3.1.3.3, "changeNumber" Contains a number assigned arbitrarily to the changelog. Section 3.1.3.4, "changeTime" The time at which a change took place. Section 3.1.3.5, "changeType" The type of change performed on an entry. Section 3.1.3.10, "targetDn" The distinguished name of an entry added, modified or deleted on a supplier server. Table 3.9. Allowed Attributes Section 3.1.3.1, "changes" Changes made to the Directory Server. Section 3.1.3.6, "deleteOldRdn" A flag that defines whether the old Relative Distinguished Name (RDN) of the entry should be kept as a distinguished attribute of the entry or should be deleted. Section 3.1.3.8, "newRdn" New RDN of an entry that is the target of a modRDN or modDN operation. Section 3.1.3.9, "newSuperior" Name of the entry that becomes the immediate superior of the existing entry when processing a modDN operation. 3.2.2. directoryServerFeature (Object Class) This object class is used specifically for entries which identify a feature of the directory service. This object class is defined by Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.40 Table 3.10. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. Table 3.11. Allowed Attributes Attribute Definition cn Specifies the common name of the entry. multiLineDescription Gives a text description of the entry. oid Specifies the OID of the feature. 3.2.3. nsBackendInstance (Object Class) This object class is used for the Directory Server back end, or database, instance entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.109 Table 3.12. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. 3.2.4. nsChangelog4Config (Object Class) In order for Directory Server 11.3 to replicate between Directory Server 4.x servers, the Directory Server 11.3 instance must have a special changelog configured. This object class defines the configuration for the retro changelog. This object class is defined for the Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.82 Table 3.13. Allowed Attributes Attribute Definition cn (common Name) Gives the common name of the entry. 3.2.5. nsDS5Replica (Object Class) This object class is for entries which define a replica in database replication. Many of these attributes are set within the back end and cannot be modified. Information on the attributes for this object class are listed with the core configuration attributes in chapter 2 of the Directory Server Configuration, Command, and File Reference . This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.108 Table 3.14. Required Attributes objectClass Defines the object classes for the entry. nsDS5ReplicaId Specifies the unique ID for suppliers in a replication environment. nsDS5ReplicaRoot Specifies the suffix DN at the root of a replicated area. Table 3.15. Allowed Attributes cn Gives the name for the replica. nsDS5Flags Specifies information that has been previously set in flags. nsDS5ReplicaAutoReferral Sets whether the server will follow configured referrals for the Directory Server database. nsDS5ReplicaBindDN Specifies the DN to use when a supplier server binds to a consumer. nsDS5ReplicaChangeCount Gives the total number of entries in the changelog and whether they have been replicated. nsDS5ReplicaLegacyConsumer Specifies whether the replica is a legacy consumer. nsDS5ReplicaName Specifies the unique ID for the replica for internal operations. nsDS5ReplicaPurgeDelay Specifies the time in seconds before the changelog is purged. nsDS5ReplicaReferral Specifies the URLs for user-defined referrals. nsDS5ReplicaReleaseTimeout Specifies a timeout after which a supplier will release a replica, whether or not it has finished sending its updates. nsDS5ReplicaTombstonePurgeInterval Specifies the time interval in seconds between purge operation cycles. nsDS5ReplicaType Defines the type of replica, such as a read-only consumer. nsDS5Task Launches a replication task, such as dumping the database contents to LDIF; this is used internally by the Directory Server supplier. nsState Stores information on the clock so that proper change sequence numbers are generated. 3.2.6. nsDS5ReplicationAgreement (Object Class) Entries with the nsDS5ReplicationAgreement object class store the information set in a replication agreement. Information on the attributes for this object class are in chapter 2 of the Directory Server Configuration, Command, and File Reference . This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.103 Table 3.16. Required Attributes objectClass Defines the object classes for the entry. cn Used for naming the replication agreement. Table 3.17. Allowed Attributes description Contains a free text description of the replication agreement. nsDS5BeginReplicaRefresh Initializes a replica manually. nsds5debugreplicatimeout Gives an alternate timeout period to use when the replication is run with debug logging. nsDS5ReplicaBindDN Specifies the DN to use when a supplier server binds to a consumer. nsDS5ReplicaBindMethod Specifies the method (SSL or simple authentication) to use for binding. nsDS5ReplicaBusyWaitTime Specifies the amount of time in seconds a supplier should wait after a consumer sends back a busy response before making another attempt to acquire access. nsDS5ReplicaChangesSentSinceStartup The number of changes sent to this replica since the server started. nsDS5ReplicaCredentials Specifies the password for the bind DN. nsDS5ReplicaHost Specifies the host name for the consumer replica. nsDS5ReplicaLastInitEnd States when the initialization of the consumer replica ended. nsDS5ReplicaLastInitStart States when the initialization of the consumer replica started. nsDS5ReplicaLastInitStatus The status for the initialization of the consumer. nsDS5ReplicaLastUpdateEnd States when the most recent replication schedule update ended. nsDS5ReplicaLastUpdateStart States when the most recent replication schedule update started. nsDS5ReplicaLastUpdateStatus Provides the status for the most recent replication schedule updates. nsDS5ReplicaPort Specifies the port number for the remote replica. nsDS5ReplicaRoot Specifies the suffix DN at the root of a replicated area. nsDS5ReplicaSessionPauseTime Specifies the amount of time in seconds a supplier should wait between update sessions. nsDS5ReplicatedAttributeList Specifies any attributes that will not be replicated to a consumer server. nsDS5ReplicaTimeout Specifies the number of seconds outbound LDAP operations will wait for a response from the remote replica before timing out and failing. nsDS5ReplicaTransportInfo Specifies the type of transport used for transporting data to and from the replica. nsDS5ReplicaUpdateInProgress States whether a replication schedule update is in progress. nsDS5ReplicaUpdateSchedule Specifies the replication schedule. nsDS50ruv Manages the internal state of the replica using the replication update vector. nsruvReplicaLastModified Contains the most recent time that an entry in the replica was modified and the changelog was updated. nsds5ReplicaStripAttrs With fractional replication, an update to an excluded attribute still triggers a replication event, but that event is empty. This attribute sets attributes to strip from the replication update. This prevents changes to attributes like internalModifyTimestamp from triggering an empty replication update. 3.2.7. nsDSWindowsReplicationAgreement (Object Class) Stores the synchronization attributes that concern the synchronization agreement. Information on the attributes for this object class are in chapter 2 of the Red Hat Directory Server Configuration, Command, and File Reference . This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.503 Table 3.18. Required Attributes objectClass Defines the object classes for the entry. cn Gives the name of the synchronization agreement. Table 3.19. Allowed Attributes description Contains a text description of the synchronization agreement. nsDS5BeginReplicaRefresh Initiates a manual synchronization. nsds5debugreplicatimeout Gives an alternate timeout period to use when the synchronization is run with debug logging. nsDS5ReplicaBindDN Specifies the DN to use when the Directory Server binds to the Windows server. nsDS5ReplicaBindMethod Specifies the method (SSL or simple authentication) to use for binding. nsDS5ReplicaBusyWaitTime Specifies the amount of time in seconds the Directory Server should wait after the Windows server sends back a busy response before making another attempt to acquire access. nsDS5ReplicaChangesSentSinceStartup Shows the number of changes sent since the Directory Server started. nsDS5ReplicaCredentials Specifies the credentials for the bind DN. nsDS5ReplicaHost Specifies the host name for the Windows domain controller of the Windows server being synchronized. nsDS5ReplicaLastInitEnd States when the last total update (resynchronization) of the Windows server ended. nsDS5ReplicaLastInitStart States when the last total update (resynchronization) of the Windows server started. nsDS5ReplicaLastInitStatus The status for the total update (resynchronization) of the Windows server. nsDS5ReplicaLastUpdateEnd States when the most recent update ended. nsDS5ReplicaLastUpdateStart States when the most recent update started. nsDS5ReplicaLastUpdateStatus Provides the status for the most recent updates. nsDS5ReplicaPort Specifies the port number for the Windows server. nsDS5ReplicaRoot Specifies the root suffix DN of the Directory Server. nsDS5ReplicaSessionPauseTime Specifies the amount of time in seconds the Directory Server should wait between update sessions. nsDS5ReplicaTimeout Specifies the number of seconds outbound LDAP operations will wait for a response from the Windows server before timing out and failing. nsDS5ReplicaTransportInfo Specifies the type of transport used for transporting data to and from the Windows server. nsDS5ReplicaUpdateInProgress States whether an update is in progress. nsDS5ReplicaUpdateSchedule Specifies the synchronization schedule. nsDS50ruv Manages the internal state of the Directory Server sync peer using the replication update vector (RUV). nsds7DirectoryReplicaSubtree Specifies the Directory Server suffix (root or sub) that is synced. nsds7DirsyncCookie Contains a cookie set by the sync service that functions as an RUV. nsds7NewWinGroupSyncEnabled Specifies whether new Windows group accounts are automatically created on the Directory Server. nsds7NewWinUserSyncEnabled Specifies whether new Windows user accounts are automatically created on the Directory Server. nsds7WindowsDomain Identifies the Windows domain being synchronized; analogous to nsDS5ReplicaHost in a replication agreement. nsds7WindowsReplicaSubtree Specifies the Windows server suffix (root or sub) that is synced. nsruvReplicaLastModified Contains the most recent time that an entry in the Directory Server sync peer was modified and the changelog was updated. winSyncInterval Sets how frequently, in seconds, the Directory Server polls the Windows server for updates to write over. If this is not set, the default is 300 , which is 300 seconds or five (5) minutes. winSyncMoveAction Sets how the sync plug-in handles corresponding entries that are discovered in Active Directory outside of the synced subtree. The sync process can ignore these entries (none, the default) or it can assume that the entries were moved intentionally to remove them from synchronization, and it can then either delete the corresponding Directory Server entry (delete) or remove the synchronization attributes and no longer sync the entry (unsync). 3.2.8. nsEncryptionConfig The nsEncryptionConfig object class stores the configuration information for allowed encryption options, such as protocols and cipher suites. This is defined in the Administrative Services. Superior Class top OID nsEncryptionConfig-oid Table 3.20. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn (commonName) Gives the common name of the device. Table 3.21. Allowed Attributes Attribute Definition nsSSL3SessionTimeout Sets the timeout period for an SSLv3 cipher session. nsSSLClientAuth Sets how the server handles client authentication. There are three possible values: allow, disallow, or require. nsSSLSessionTimeout Sets the timeout period for a cipher session. nsSSLSupportedCiphers Contains a list of all ciphers available to be used with secure connections to the server. nsTLS1 Sets whether TLS version 1 is enabled for the server. 3.2.9. nsEncryptionModule The nsEncryptionModule object class stores the encryption module information. This is defined in the Administrative Services. Superior Class top OID nsEncryptionModule-oid Table 3.22. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn (commonName) Gives the common name of the device. Table 3.23. Allowed Attributes Attribute Definition nsSSLActivation Sets whether to enable a cipher family. nsSSLPersonalitySSL Contains the name of the certificate used by the server for SSL. nsSSLToken Identifies the security token used by the server. 3.2.10. nsMappingTree (Object Class) A mapping tree maps a suffix to the back end. Each mapping tree entry uses the nsMappingTree object class. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.110 Table 3.24. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. cn Gives the common name of the entry. 3.2.11. nsSaslMapping (Object Class) This object class is used for entries which contain an identity mapping configuration for mapping SASL attributes to the Directory Server attributes. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.317 Table 3.25. Required Attributes objectClass Defines the object classes for the entry. cn Gives the name of the SASL mapping entry. Section 3.1.13.1, "nsSaslMapBaseDNTemplate" Contains the search base DN template. Section 3.1.13.2, "nsSaslMapFilterTemplate" Contains the search filter template. Section 3.1.13.4, "nsSaslMapRegexString" Contains a regular expression to match SASL identity strings. 3.2.12. nsslapdConfig (Object Class) The nsslapdConfig object class defines the configuration object, cn=config , for the Directory Server instance. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.39 Table 3.26. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. Table 3.27. Allowed Attributes Attribute Definition cn Gives the common name of the entry. 3.2.13. passwordPolicy (Object Class) Both local and global password policies take the passwordPolicy object class. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.13 Table 3.28. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. Table 3.29. Allowed Attributes Attribute Definition Section 3.1.1.192, "passwordMaxAge (Password Maximum Age)" Sets the number of seconds after which user passwords expire. Section 3.1.1.182, "passwordExp (Password Expiration)" Identifies whether the user's password expires after an interval given by the passwordMaxAge attribute. Section 3.1.1.204, "passwordMinLength (Password Minimum Length)" Sets the minimum number of characters that must be used in passwords. Section 3.1.1.187, "passwordInHistory (Number of Passwords to Remember)" Sets the number of passwords the directory stores in the history. Section 3.1.1.179, "passwordChange (Password Change)" Identifies whether or not users is allowed to change their own password. Section 3.1.1.220, "passwordWarning (Send Warning)" Sets the number of seconds before a warning message is sent to users whose password is about to expire. Section 3.1.1.190, "passwordLockout (Account Lockout)" Identifies whether or not users are locked out of the directory after a given number of failed bind attempts. Section 3.1.1.195, "passwordMaxFailure (Maximum Password Failures)" Sets the number of failed bind attempts after which a user will be locked out of the directory. Section 3.1.1.219, "passwordUnlock (Unlock Account)" Identifies whether a user is locked out until the password is reset by an administrator or whether the user can log in again after a given lockout duration. The default is to allow a user to log back in after the lockout period. Section 3.1.1.191, "passwordLockoutDuration (Lockout Duration)" Sets the time, in seconds, that users will be locked out of the directory. Section 3.1.1.180, "passwordCheckSyntax (Check Password Syntax)" Identifies whether the password syntax is checked by the server before the password is saved. Section 3.1.1.209, "passwordMustChange (Password Must Change)" Identifies whether or not to change their passwords when they first login to the directory or after the password is reset by the Directory Manager. Section 3.1.1.214, "passwordStorageScheme (Password Storage Scheme)" Sets the type of encryption used to store Directory Server passwords. Section 3.1.1.200, "passwordMinAge (Password Minimum Age)" Sets the number of seconds that must pass before a user can change their password. Section 3.1.1.211, "passwordResetFailureCount (Reset Password Failure Count After)" Sets the time, in seconds, after which the password failure counter will be reset. Each time an invalid password is sent from the user's account, the password failure counter is incremented. Section 3.1.1.185, "passwordGraceLimit (Password Expiration)" Sets the number of grace logins permitted when a user's password is expired. Section 3.1.1.203, "PasswordMinDigits (Password Syntax)" Sets the minimum number of numeric characters (0 through 9) which must be used in the password. Section 3.1.1.201, "passwordMinAlphas (Password Syntax)" Sets the minimum number of alphabetic chracters that must be used in the password. Section 3.1.1.208, "PasswordMinUppers (Password Syntax)" Sets the minimum number of upper case alphabetic characters, A to Z, which must be used in the password. Section 3.1.1.205, "PasswordMinLowers (Password Syntax)" Sets the minimum number of lower case alphabetic characters, a to z, which must be used in the password. Section 3.1.1.206, "PasswordMinSpecials (Password Syntax)" Sets the minimum number of special ASCII characters, such as !@#USD. , which must be used in the password. Section 3.1.1.199, "passwordMin8Bit (Password Syntax)" Sets the minimum number of 8-bit chracters used in the password. Section 3.1.1.196, "passwordMaxRepeats (Password Syntax)" Sets the maximum number of times that the same character can be used in row. Section 3.1.1.202, "passwordMinCategories (Password Syntax)" Sets the minimum number of categories which must be used in the password. Section 3.1.1.207, "PasswordMinTokenLength (Password Syntax)" Sets the length to check for trivial words. Section 3.1.1.216, "passwordTPRDelayValidFrom" Sets a delay when temporary passwords become valid. Section 3.1.1.215, "passwordTPRDelayExpireAt" Sets the number of seconds a temporary password is valid. Section 3.1.1.217, "passwordTPRMaxUse" Sets the maximum number off attempts a temporary password can be used 3.3. Root DSE Attributes The attributes in this section are used to define the root directory server entry (DSE) for the server instance. The information defined in the DSE relates to the actual configuration of the server instance, such as the controls, mechanisms, or features supported in that version of the server software. It also contains information specific to the instance, like its build number and installation date. The DSE is a special entry, outside the normal DIT, and can be returned by searching with a null search base. For example: 3.3.1. dataversion This attribute contains a timestamp which shows the most recent edit time for any data in the directory. OID Syntax GeneralizedTime Multi- or Single-Valued Single-valued Defined in Directory Server 3.3.2. defaultNamingContext Corresponds to the naming context, out of all configured naming contexts, which clients should use by default. OID Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 3.3.3. lastusn The USN Plug-in assigns a sequence number to every entry whenever a write operation - add, modify, delete, and modrdn - is performed for that entry. The USN is assigned in the entryUSN operational attribute for the entry. The USN Plug-in has two modes: local and global. In local mode, each database maintained for a server instance has its own instance of the USN Plug-in with a separate USN counter per back end database. The most recent USN assigned for any entry in the database is displayed in the lastusn attribute. When the USN Plug-in is set to local mode, the lastUSN attribute shows both the database which assigned the USN and the USN: For example: In global mode, when the database uses a shared USN counter, the lastUSN value shows the latest USN assigned by any database: Note This attribute does not count internal server operations. Only normal write operations in the back end database - add, modify, delete, and modrdn - increment the USN count. Syntax Integer Multi- or Single-Valued Multi-valued Defined in Directory Server 3.3.4. namingContexts Corresponds to a naming context the server is controlling or shadowing. When the Directory Server does not control any information (such as when it is an LDAP gateway to a public X.500 directory), this attribute is absent. When the Directory Server believes it contains the entire directory, the attribute has a single value, and that value is the empty string (indicating the null DN of the root).This attribute permits a client contacting a server to choose suitable base objects for searching. OID 1.3.6.1.4.1.1466.101.120.5 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2252 3.3.5. netscapemdsuffix This attribute contains the DN for the top suffix of the directory tree for machine data maintained in the server. The DN itself points to an LDAP URL. For example: OID 2.16.840.1.113730.3.1.212 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 3.3.6. supportedControl The values of this attribute are the object identifiers (OIDs) that identify the controls supported by the server. When the server does not support controls, this attribute is absent. OID 1.3.6.1.4.1.1466.101.120.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 3.3.7. supportedExtension The values of this attribute are the object identifiers (OIDs) that identify the extended operations supported by the server. When the server does not support extended operations, this attribute is absent. OID 1.3.6.1.4.1.1466.101.120.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 3.3.8. supportedFeatures This attribute contains features supported by the current version of Red Hat Directory Server. OID 1.3.6.1.4.1.4203.1.3.5 Syntax OID Multi- or Single-Valued Multi-valued Defined in RFC 3674 3.3.9. supportedLDAPVersion This attribute identifies the versions of the LDAP protocol implemented by the server. OID 1.3.6.1.4.1.1466.101.120.15 Syntax Integer Multi- or Single-Valued Multi-valued Defined in RFC 2252 3.3.10. supportedSASLMechanisms This attribute identifies the names of the SASL mechanisms supported by the server. When the server does not support SASL attributes, this attribute is absent. OID 1.3.6.1.4.1.1466.101.120.14 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 3.3.11. vendorName This attribute contains the name of the server vendor. OID 1.3.6.1.1.4 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 3045 3.3.12. vendorVersion This attribute shows the vendor's version number for the server. OID 1.3.6.1.1.5 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 3045 3.4. Legacy Attributes The attributes were standard with Directory Server 4.x and older. This are still included with the schema for compatibility, but are not for current versions of the Directory Server. 3.4.1. Legacy Server Attributes These attributes were originally used to configure the server instance entries for Directory Server 4.x and older servers. 3.4.1.1. LDAPServer (Object Class) This object class identifies the LDAP server information. It is defined by Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.35 Table 3.30. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. cn Specifies the common name of the entry. Table 3.31. Allowed Attributes Attribute Definition description Gives a text description of the entry. l (localityName) Gives the city or geographical location of the entry. ou (organizationalUnitName) Gives the organizational unit or division to which the account belongs. seeAlso Contains a URL to another entry or site with related information. generation Store the server generation string. changeLogMaximumAge Specifies changelog maximum age. changeLogMaximumSize Specifies maximum changelog size. 3.4.1.2. changeLogMaximumAge This sets the maximum age for the changelog maintained by the server. OID 2.16.840.1.113730.3.1.200 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.4.1.3. changeLogMaximumConcurrentWrites This attribute sets the maximum number of concurrent writes that can be written to the changelog. OID 2.16.840.1.113730.3.1.205 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.4.1.4. changeLogMaximumSize This attribute sets the maximum size for the changelog. OID 2.16.840.1.113730.3.1.201 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.4.1.5. generation This attribute contains a byte vector that uniquely identifies that specific server and version. This number distinguishes between servers during replication. OID 2.16.840.1.113730.3.1.612 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 3.4.1.6. nsSynchUniqueAttribute This attribute is used for Windows synchonization. OID 2.16.840.1.113730.3.1.407 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 3.4.1.7. nsSynchUserIDFormat This attribute is used for Windows synchonization. OID 2.16.840.1.113730.3.1.406 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server | [
"time: 20221027102743 dn: uid=73747737483,ou=people,dc=example,dc=com #cn: Frank Lee result: 0 changetype: modify replace: description description: Adds cn attribute to the audit log - replace: modifiersname modifiersname: cn=dm - replace: modifytimestamp modifytimestamp: 20221027142743Z",
"[time_stamp] conn=5 op=-1 fd=64 Disconnect - Protocol error - Unknown Proxy - P4",
"Not listening for new connections -- too many fds open",
"dn: cn=my_group,ou=groups,dc=example,dc=com modifiersname: uid=jsmith,ou=people,dc=example,dc=com internalModifiersname: cn=referential integrity plugin,cn=plugins,cn=config",
"ou=People,dc=example,dc=com",
"ou=Groups,dc=example,dc=com",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"cn=config\" -s sub -x \"(objectclass=*)\" | grep nsslapd-requiresrestart",
"nsslapd-reservedescriptor = 20 + ( NldbmBackends * 4) + NglobalIndex + ReplicationDescriptor + ChainingBackendDescriptors + PTADescriptors + SSLDescriptors",
"ldapsearch -b <basedn> \"(filter)\" \"sn someothertext\" dn: <matched dn> sn someothertext: <sn>",
"[DATE] - SSL alert: ldap_sasl_bind(\"\",LDAP_SASL_EXTERNAL) 81 (Netscape runtime error -12276 - Unable to communicate securely with peer: requested domain name does not match the server's certificate.) [DATE] NSMMReplicationPlugin - agmt=\"cn=SSL Replication Agreement to host1\" (host1.example.com:636): Replication bind with SSL client authentication failed: LDAP error 81 (Can't contact LDAP server)",
"changeType: modify",
"dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config objectClass: top objectClass: directoryServerFeature oid: 2.16.840.1.113730.3.4.9 cn: VLV Request Control aci: (targetattr != \"aci\")(version 3.0; acl \"VLV Request Control\"; allow( read, search, compare, proxy ) userdn = \"ldap:///all\";) creatorsName: cn=server,cn=plugins,cn=config modifiersName: cn=server,cn=plugins,cn=config createTimestamp: 20200129132357Z modifyTimestamp: 20200129132357Z",
"nsds5debugreplicatimeout: seconds[:debuglevel]",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -s sub -b dc=example,dc=com \"(|(objectclass=nsTombstone)(nsDS5ReplConflict=*))\" dn nsDS5ReplConflict nsUniqueID",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -s sub -b dc=example,dc=com \"(|(objectclass=nsTombstone)(nsUniqueID=66a2b699-1dd211b2-807fa9c3-a58714648))\"",
"nsds5replicaChangesSentSinceStartup:: MToxLzAg",
"Total update flow control gives time (2000 msec) to the consumer before sending more entries [ msgid sent: xxx, rcv: yyy]) If total update fails you can try to increase nsds5ReplicaFlowControlPause and/or decrease nsds5ReplicaFlowControlWindow in the replica agreement configuration",
"connection: A:YYYYMMDDhhmmssZ:B:C:D:E:F:G:H:I:IP_address",
"connection: 69:20200604081953Z:6086:6086:-:cn=proxy,ou=special_users,dc=example,dc=test:0:11:27:7448846:ip=192.0.2.1",
"dn: cn= task_id ,cn= task_type ,cn=tasks,cn=config",
"dn: cn=example import,cn=import,cn=tasks,cn=config objectclass: extensibleObject cn: example import nsFilename: /home/files/example.ldif nsInstance: userRoot",
"nsFilename: file1.ldif nsFilename: file2.ldif",
"dn: cn=example backup,cn=backup,cn=tasks,cn=config objectclass: extensibleObject cn: example backup nsArchiveDir: /export/backups/ nsDatabaseType: ldbm database",
"dn: cn=example restore,cn=restore,cn=tasks,cn=config objectclass: extensibleObject cn: example restore nsArchiveDir: /export/backups/ nsDatabaseType: ldbm database",
"dn: cn=example presence index,cn=index,cn=tasks,cn=config objectclass: top objectclass: extensibleObject cn: example presence index nsInstance: userRoot nsIndexAttribute: cn:pres dn: cn=example VLV index,cn=index,cn=tasks,cn=config objectclass: extensibleObject cn: example VLV index nsIndexVLVAttribute: \"by MCC ou=people,dc=example,dc=com\"",
"nsIndexAttribute: attribute:index1,index2",
"dn: cn=example schema reload,cn=schema reload task,cn=tasks,cn=config objectclass: extensibleObject cn:example schema reload schemadir: /export/schema",
"dn: cn=example memberOf,cn=memberof task,cn=tasks,cn=config objectclass: extensibleObject cn:example memberOf basedn: ou=people,dc=example,dc=com filter: (objectclass=groupofnames)",
"dn: cn=example,cn=fixup linked attributes,cn=tasks,cn=config objectclass: extensibleObject cn:example linkdn: cn=Example Link,cn=Linked Attributes,cn=plugins,cn=config",
"linkdn: cn=Manager Attributes,cn=Linked Attributes,cn=plugins,cn=config",
"dn: cn=example,cn=syntax validate,cn=tasks,cn=config objectclass: extensibleObject cn:example basedn: ou=people,dc=example,dc=com filter: \"(objectclass=inetorgperson)\"",
"basedn: ou=people,dc=example,dc=com",
"filter: \"(objectclass=person)\"",
"dn: cn=example,cn=USN tombstone cleanup task,cn=tasks,cn=config objectclass: extensibleObject cn:example backend: userroot max_usn_to_delete: 500",
"ldap_add: DSA is unwilling to perform",
"[...] usn-plugin - Suffix dc=example,dc=com is replicated. Unwilling to perform cleaning up tombstones.",
"[09/Sep/2020:09:03:43 -0600] NSMMReplicationPlugin - ruv_compare_ruv: RUV [changelog max RUV] does not contain element [{replica 55 ldap://server.example.com:389} 4e6a27ca000000370000 4e6a27e8000000370000] which is present in RUV [database RUV] ... [09/Sep/2020:09:03:43 -0600] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: for replica dc=example,dc=com there were some differences between the changelog max RUV and the database RUV. If there are obsolete elements in the database RUV, you should remove them using the CLEANRUV task. If they are not obsolete, you should check their status to see why there are no changes from those servers in the changelog.",
"dn: cn=clean 55,cn=cleanallruv,cn=tasks,cn=config objectclass: extensibleObject replica-base-dn: dc=example,dc=com replica-id: 55 replica-force-cleaning: no cn: clean 55",
"dn: cn=abort 55,cn=abort cleanallruv,cn=tasks,cn=config objectclass: extensibleObject replica-base-dn: dc=example,dc=com replica-id: 55 replica-certify-all: yes cn: abort 55",
"nsslapd-return-default-opattr: supportedsaslmechanisms nsslapd-return-default-opattr: nsBackendSuffix nsslapd-return-default-opattr: subschemasubentry nsslapd-return-default-opattr: supportedldapversion nsslapd-return-default-opattr: supportedcontrol nsslapd-return-default-opattr: ref nsslapd-return-default-opattr: vendorname nsslapd-return-default-opattr: vendorVersion nsslapd-return-default-opattr: supportedextension nsslapd-return-default-opattr: namingcontexts",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -s base -b \"\" \"objectclass=*\"",
"dataversion: 020090923175302020090923175302",
"lastusn; database_name : USN",
"lastusn;example1: 213 lastusn;example2: 207",
"lastusn: 420",
"cn=ldap://dc= server_name ,dc=example,dc=com:389"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/Core_Server_Configuration_Reference |
Chapter 10. About Red Hat Process Automation Manager | Chapter 10. About Red Hat Process Automation Manager Red Hat Process Automation Manager is the Red Hat middleware platform for creating business automation applications and microservices. It enables enterprise business and IT users to document, simulate, manage, automate, and monitor business processes and policies. It is designed to empower business and IT users to collaborate more effectively, so business applications can be changed easily and quickly. The product is made up of Business Central and KIE Server. KIE Server is the server where rules and other artifacts are executed. It is used to instantiate and execute rules and solve planning problems. KIE Server provides the runtime environment for business assets and accesses the data stored in the assets repository (knowledge store). Business Central is the graphical user interface where you create and manage business rules that KIE Server executes. It enables you to perform the following tasks: Create, manage, and edit your rules, processes, and related assets. Manage connected KIE Server instances and their KIE containers (deployment units). Execute runtime operations against processes and tasks in KIE Server instances connected to Business Central. Business Central is also available as a standalone JAR file. You can use the Business Central standalone JAR file to run Business Central without needing to deploy it to an application server. Red Hat JBoss Enterprise Application Platform (Red Hat JBoss EAP) 7.4 is a certified implementation of the Java Enterprise Edition 8 (Java EE 8) full and web profile specifications. Red Hat JBoss EAP provides preconfigured options for features such as high availability, clustering, messaging, and distributed caching. It also enables users to write, deploy, and run applications using the various APIs and services that Red Hat JBoss EAP provides. The instructions in this document explain how to install Red Hat Process Automation Manager in a Red Hat JBoss EAP 7.4 server instance. For instructions on how to install Red Hat Process Automation Manager in other environments, see the following documents: Installing and configuring KIE Server on IBM WebSphere Application Server Installing and configuring KIE Server on Oracle WebLogic Server Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 3 using templates For information about supported components, see the following documents: What is the mapping between Red Hat Process Automation Manager and the Maven library version? Red Hat Process Automation Manager 7 Supported Configurations | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/installing-con_install-on-eap |
Getting started with Red Hat JBoss Enterprise Application Platform | Getting started with Red Hat JBoss Enterprise Application Platform Red Hat JBoss Enterprise Application Platform 8.0 Get up and running with Red Hat JBoss Enterprise Application Platform quickly. Learn administrative tasks such as basic installation, management, and configuration. Get started writing Jakarta EE applications by using the JBoss EAP quickstarts Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/getting_started_with_red_hat_jboss_enterprise_application_platform/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/firewall_rules_for_red_hat_openstack_platform/making-open-source-more-inclusive |
Chapter 1. OpenShift Container Platform security and compliance | Chapter 1. OpenShift Container Platform security and compliance 1.1. Security overview It is important to understand how to properly secure various aspects of your OpenShift Container Platform cluster. Container security A good starting point to understanding OpenShift Container Platform security is to review the concepts in Understanding container security . This and subsequent sections provide a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. These sections also include information on the following topics: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. Auditing OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Administrators can configure the audit log policy and view audit logs . Certificates Certificates are used by various components to validate access to the cluster. Administrators can replace the default ingress certificate , add API server certificates , or add a service certificate . You can also review more details about the types of certificates used by the cluster: User-provided certificates for the API server Proxy certificates Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Aggregated API client certificates Machine Config Operator certificates User-provided certificates for default ingress Ingress certificates Monitoring and cluster logging Operator component certificates Control plane certificates Encrypting data You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. Vulnerability scanning Administrators can use the Red Hat Quay Container Security Operator to run vulnerability scans and review information about detected vulnerabilities. 1.2. Compliance overview For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. Compliance checking Administrators can use the Compliance Operator to run compliance scans and recommend remediations for any issues found. The oc-compliance plugin is an OpenShift CLI ( oc ) plugin that provides a set of utilities to easily interact with the Compliance Operator. File integrity checking Administrators can use the File Integrity Operator to continually run file integrity checks on cluster nodes and provide a log of files that have been modified. 1.3. Additional resources Understanding authentication Configuring the internal OAuth server Understanding identity provider configuration Using RBAC to define and apply permissions Managing security context constraints | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_and_compliance/security-compliance-overview |
10.5. Customizing Desktop Backgrounds | 10.5. Customizing Desktop Backgrounds Using the dconf utility, you can configure the default background, add extra backgrounds, or add multiple backgrounds. If the users of the system will not be permitted to change these settings from the defaults, then system administrators need to lock the settings using the locks directory. Otherwise each user will be able to customize the setting to suit their own preferences. For more information, see Section 9.5.1, "Locking Down Specific Settings" . 10.5.1. Customizing the Default Desktop Background You can configure the default desktop background and its appearance by setting the relevant GSettings keys in the org.gnome.desktop.background schema. For more information about GSettings, see Chapter 9, Configuring Desktop with GSettings and dconf . Procedure 10.10. Setting the Default Background Create a local database for machine-wide settings in /etc/dconf/db/local.d/ 00-background : Override the user's setting to prevent the user from changing it in /etc/dconf/db/local.d/locks/background : For more information, see Section 9.5.1, "Locking Down Specific Settings" . Update the system databases: Users must log out and back in again before the system-wide settings take effect. 10.5.2. Adding Extra Backgrounds You can make extra backgrounds available to users on your system. Create a filename .xml file (there are no requirements for file names) specifying your extra background's appearance using the org.gnome.desktop.background schemas . Here is a list of the most frequently used schemas: Table 10.1. org.gnome.desktop.background schemas GSettings Keys Key Name Possible Values Description picture-options "none", "wallpaper", "centered", "scaled", "stretched", "zoom", "spanned" Determines how the image set by wallpaper_filename is rendered. color-shading-type "horizontal", "vertical", and "solid" How to shade the background color. primary-color default: #023c88 Left or Top color when drawing gradients, or the solid color. secondary-color default: #5789ca Right or Bottom color when drawing gradients, not used for solid color. The full range of options is to be found in the dconf-editor GUI or gsettings command-line utility. For more information, see Section 9.3, "Browsing GSettings Values for Desktop Applications" . Store the filename .xml file in the /usr/share/gnome-background-properties/ directory. When the user clicks their name in the top right corner, chooses Settings , and in the Personal section of the table selects Background , they will see the new background available. Look at the example and see how org.gnome.desktop.background GSettings keys are implemented practically: Example 10.4. Extra Backgrounds File <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd"> <wallpapers> <wallpaper deleted="false"> <name>Company Background</name> <name xml:lang="de">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> </wallpapers> In one configuration file, you can specify multiple <wallpaper> elements to add more backgrounds. See the following example which shows an .xml file with two <wallpaper> elements, adding two different backgrounds: Example 10.5. Extra Backgrounds File with Two Wallpaper Elements <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd"> <wallpapers> <wallpaper deleted="false"> <name>Company Background</name> <name xml:lang="de">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> <wallpaper deleted="false"> <name>Company Background 2</name> <name xml:lang="de">Firmenhintergrund 2</name> <filename>/usr/local/share/backgrounds/company-wallpaper-2.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ff0000</pcolor> <scolor>#00ffff</scolor> </wallpaper> </wallpapers> 10.5.3. Setting the Screen Shield Screen Shield is the screen that quickly slides down when the system is locked. It is controlled by the org.gnome.desktop.screensaver.picture-uri GSettings key. Since GDM uses its own dconf profile, you can set the default background by changing the settings in that profile. For more information about GSettings and dconf , see Chapter 9, Configuring Desktop with GSettings and dconf . Procedure 10.11. Adding a Logo to the Screen Shield Create a gdm database for machine-wide settings in /etc/dconf/db/gdm.d/ 01-screensaver : Replace /opt/corp/background.jpg with the path to the image file you want to use as the Screen Shield. Supported formats are PNG, JPG, JPEG, and TGA. The image will be scaled if necessary to fit the screen. Update the system databases: You must log out before the system-wide settings take effect. time you lock the screen, the new Screen Shield will show in the background. In the foreground, time, date and the current day of the week will be displayed. 10.5.3.1. What If the Screen Shield Does Not Update? Make sure that you have run the dconf update command as root to update the system databases. In case the background does not update, try restarting GDM . For more information, see Section 14.1.1, "Restarting GDM" . | [
"Specify the dconf path Specify the path to the desktop background image file picture-uri='file:///usr/local/share/backgrounds/wallpaper.jpg' Specify one of the rendering options for the background image: 'none', 'wallpaper', 'centered', 'scaled', 'stretched', 'zoom', 'spanned' picture-options='scaled' Specify the left or top color when drawing gradients or the solid color primary-color='000000' Specify the right or bottom color when drawing gradients secondary-color='FFFFFF'",
"List the keys used to configure the desktop background /org/gnome/desktop/background/picture-uri /org/gnome/desktop/background/picture-options /org/gnome/desktop/background/primary-color /org/gnome/desktop/background/secondary-color",
"dconf update",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <!DOCTYPE wallpapers SYSTEM \"gnome-wp-list.dtd\"> <wallpapers> <wallpaper deleted=\"false\"> <name>Company Background</name> <name xml:lang=\"de\">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> </wallpapers>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <!DOCTYPE wallpapers SYSTEM \"gnome-wp-list.dtd\"> <wallpapers> <wallpaper deleted=\"false\"> <name>Company Background</name> <name xml:lang=\"de\">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> <wallpaper deleted=\"false\"> <name>Company Background 2</name> <name xml:lang=\"de\">Firmenhintergrund 2</name> <filename>/usr/local/share/backgrounds/company-wallpaper-2.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ff0000</pcolor> <scolor>#00ffff</scolor> </wallpaper> </wallpapers>",
"[org/gnome/desktop/screensaver] picture-uri=' file:///opt/corp/background.jpg '",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/customize-desktop-backgrounds |
Chapter 22. Ceph Source | Chapter 22. Ceph Source Receive data from an Ceph Bucket, managed by a Object Storage Gateway. 22.1. Configuration Options The following table summarizes the configuration options available for the ceph-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key. string bucketName * Bucket Name The Ceph Bucket name. string cephUrl * Ceph Url Address Set the Ceph Object Storage Address Url. string "http://ceph-storage-address.com" secretKey * Secret Key The secret key. string zoneGroup * Bucket Zone Group The bucket zone group. string autoCreateBucket Autocreate Bucket Specifies to automatically create the bucket. boolean false delay Delay The number of milliseconds before the poll of the selected bucket. integer 500 deleteAfterRead Auto-delete Objects Specifies to delete objects after consuming them. boolean true ignoreBody Ignore Body If true, the Object body is ignored. Setting this to true overrides any behavior defined by the includeBody option. If false, the object is put in the body. boolean false includeBody Include Body If true, the exchange is consumed and put into the body and closed. If false, the Object stream is put raw into the body and the headers are set with the object metadata. boolean true prefix Prefix The bucket prefix to consider while searching. string "folder/" Note Fields marked with an asterisk (*) are mandatory. 22.2. Dependencies At runtime, the ceph-source Kamelet relies upon the presence of the following dependencies: camel:aws2-s3 camel:kamelet 22.3. Usage This section describes how you can use the ceph-source . 22.3.1. Knative Source You can use the ceph-source Kamelet as a Knative source by binding it to a Knative object. ceph-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: "The Access Key" bucketName: "The Bucket Name" cephUrl: "http://ceph-storage-address.com" secretKey: "The Secret Key" zoneGroup: "The Bucket Zone Group" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 22.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 22.3.1.2. Procedure for using the cluster CLI Save the ceph-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f ceph-source-binding.yaml 22.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind ceph-source -p "source.accessKey=The Access Key" -p "source.bucketName=The Bucket Name" -p "source.cephUrl=http://ceph-storage-address.com" -p "source.secretKey=The Secret Key" -p "source.zoneGroup=The Bucket Zone Group" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 22.3.2. Kafka Source You can use the ceph-source Kamelet as a Kafka source by binding it to a Kafka topic. ceph-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: "The Access Key" bucketName: "The Bucket Name" cephUrl: "http://ceph-storage-address.com" secretKey: "The Secret Key" zoneGroup: "The Bucket Zone Group" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 22.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 22.3.2.2. Procedure for using the cluster CLI Save the ceph-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f ceph-source-binding.yaml 22.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind ceph-source -p "source.accessKey=The Access Key" -p "source.bucketName=The Bucket Name" -p "source.cephUrl=http://ceph-storage-address.com" -p "source.secretKey=The Secret Key" -p "source.zoneGroup=The Bucket Zone Group" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 22.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/ceph-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: \"The Access Key\" bucketName: \"The Bucket Name\" cephUrl: \"http://ceph-storage-address.com\" secretKey: \"The Secret Key\" zoneGroup: \"The Bucket Zone Group\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f ceph-source-binding.yaml",
"kamel bind ceph-source -p \"source.accessKey=The Access Key\" -p \"source.bucketName=The Bucket Name\" -p \"source.cephUrl=http://ceph-storage-address.com\" -p \"source.secretKey=The Secret Key\" -p \"source.zoneGroup=The Bucket Zone Group\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: ceph-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: ceph-source properties: accessKey: \"The Access Key\" bucketName: \"The Bucket Name\" cephUrl: \"http://ceph-storage-address.com\" secretKey: \"The Secret Key\" zoneGroup: \"The Bucket Zone Group\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f ceph-source-binding.yaml",
"kamel bind ceph-source -p \"source.accessKey=The Access Key\" -p \"source.bucketName=The Bucket Name\" -p \"source.cephUrl=http://ceph-storage-address.com\" -p \"source.secretKey=The Secret Key\" -p \"source.zoneGroup=The Bucket Zone Group\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/ceph-source |
Chapter 7. Using image streams with Kubernetes resources | Chapter 7. Using image streams with Kubernetes resources Image streams, being OpenShift Container Platform native resources, work with all native resources available in OpenShift Container Platform, such as Build or DeploymentConfigs resources. It is also possible to make them work with native Kubernetes resources, such as Job , ReplicationController , ReplicaSet or Kubernetes Deployment resources. 7.1. Enabling image streams with Kubernetes resources When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example ruby:2.5 , where ruby is the name of an image stream that has a tag named 2.5 and resides in the same project as the resource making the reference. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. There are two ways to enable image streams with Kubernetes resources: Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field. Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field. Procedure You can use oc set image-lookup to enable image stream resolution on a specific resource or image stream resolution on an image stream. To allow all resources to reference the image stream named mysql , enter the following command: USD oc set image-lookup mysql This sets the Imagestream.spec.lookupPolicy.local field to true. Imagestream with image lookup enabled apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true When enabled, the behavior is enabled for all tags within the image stream. Then you can query the image streams and see if the option is set: USD oc set image-lookup imagestream --list You can enable image lookup on a specific resource. To allow the Kubernetes deployment named mysql to use image streams, run the following command: USD oc set image-lookup deploy/mysql This sets the alpha.image.policy.openshift.io/resolve-names annotation on the deployment. Deployment with image lookup enabled apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql You can disable image lookup. To disable image lookup, pass --enabled=false : USD oc set image-lookup deploy/mysql --enabled=false | [
"oc set image-lookup mysql",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true",
"oc set image-lookup imagestream --list",
"oc set image-lookup deploy/mysql",
"apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql",
"oc set image-lookup deploy/mysql --enabled=false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/images/using-imagestreams-with-kube-resources |
Chapter 5. Scaling a user-provisioned cluster with the Bare Metal Operator | Chapter 5. Scaling a user-provisioned cluster with the Bare Metal Operator After deploying a user-provisioned infrastructure cluster, you can use the Bare Metal Operator (BMO) and other metal 3 components to scale bare-metal hosts in the cluster. This approach helps you to scale a user-provisioned cluster in a more automated way. 5.1. About scaling a user-provisioned cluster with the Bare Metal Operator You can scale user-provisioned infrastructure clusters by using the Bare Metal Operator (BMO) and other metal 3 components. User-provisioned infrastructure installations do not feature the Machine API Operator. The Machine API Operator typically manages the lifecycle of bare-metal nodes in a cluster. However, it is possible to use the BMO and other metal 3 components to scale nodes in user-provisioned clusters without requiring the Machine API Operator. 5.1.1. Prerequisites for scaling a user-provisioned cluster You installed a user-provisioned infrastructure cluster on bare metal. You have baseboard management controller (BMC) access to the hosts. 5.1.2. Limitations for scaling a user-provisioned cluster You cannot use a provisioning network to scale user-provisioned infrastructure clusters by using the Bare Metal Operator (BMO). Consequentially, you can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia . You cannot scale MachineSet objects in user-provisioned infrastructure clusters by using the BMO. 5.2. Configuring a provisioning resource to scale user-provisioned clusters Create a Provisioning custom resource (CR) to enable Metal platform components on a user-provisioned infrastructure cluster. Prerequisites You installed a user-provisioned infrastructure cluster on bare metal. Procedure Create a Provisioning CR. Save the following YAML in the provisioning.yaml file: apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: "Disabled" watchAllNamespaces: false Note OpenShift Container Platform 4.16 does not support enabling a provisioning network when you scale a user-provisioned cluster by using the Bare Metal Operator. Create the Provisioning CR by running the following command: USD oc create -f provisioning.yaml Example output provisioning.metal3.io/provisioning-configuration created Verification Verify that the provisioning service is running by running the following command: USD oc get pods -n openshift-machine-api Example output NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h 5.3. Provisioning new hosts in a user-provisioned cluster by using the BMO You can use the Bare Metal Operator (BMO) to provision bare-metal hosts in a user-provisioned cluster by creating a BareMetalHost custom resource (CR). Note Provisioning bare-metal hosts to the cluster by using the BMO sets the spec.externallyProvisioned specification in the BareMetalHost custom resource to false by default. Do not set the spec.externallyProvisioned specification to true , because this setting results in unexpected behavior. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Create a configuration file for the bare-metal node. Depending if you use either a static configuration or a DHCP server, choose one of the following example bmh.yaml files and configure it to your needs by replacing values in the YAML to match your environment: To deploy with a static configuration, create the following bmh.yaml file: --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 -hop-address: <next_hop_ip_address> 7 -hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 1 Replace all instances of <num> with a unique compute node number for the bare-metal nodes in the name , credentialsName , and preprovisioningNetworkDataName fields. 2 Add the NMState YAML syntax to configure the host interfaces. To configure the network interface for a newly created node, specify the name of the secret that has the network configuration. Follow the nmstate syntax to define the network configuration for your node. See "Preparing the bare-metal node" for details on configuring NMState syntax. 3 Optional: If you have configured the network interface with nmstate , and you want to disable an interface, set state: up with the IP addresses set to enabled: false . 4 Replace <nic1_name> with the name of the bare-metal node's first network interface controller (NIC). 5 Replace <ip_address> with the IP address of the bare-metal node's NIC. 6 Replace <dns_ip_address> with the IP address of the bare-metal node's DNS resolver. 7 Replace <next_hop_ip_address> with the IP address of the bare-metal node's external gateway. 8 Replace <next_hop_nic1_name> with the name of the bare-metal node's external gateway. 9 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 10 Replace <nic1_mac_address> with the MAC address of the bare-metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 11 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc_url> with the URL of the bare-metal node's baseboard management controller. 12 Optional: Replace <root_device_hint> with a device path when specifying a root device hint. See "Root device hints" for additional details. When configuring the network interface with a static configuration by using nmstate , set state: up with the IP addresses set to enabled: false : --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # ... interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false # ... To deploy with a DHCP configuration, create the following bmh.yaml file: --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5 1 Replace <num> with a unique compute node number for the bare-metal nodes in the name and credentialsName fields. 2 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 3 Replace <nic1_mac_address> with the MAC address of the bare-metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 4 Replace <protocol> with the BMC protocol, such as IPMI, Redfish, or others. Replace <bmc_url> with the URL of the bare-metal node's baseboard management controller. 5 Optional: Replace <root_device_hint> with a device path when specifying a root device hint. See "Root device hints" for additional details. Important If the MAC address of an existing bare-metal node matches the MAC address of the bare-metal host that you are attempting to provision, then the installation will fail. If the host enrollment, inspection, cleaning, or other steps fail, the Bare Metal Operator retries the installation continuously. See "Diagnosing a duplicate MAC address when provisioning a new host in the cluster" for additional details. Create the bare-metal node by running the following command: USD oc create -f bmh.yaml Example output secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Inspect the bare-metal node by running the following command: USD oc -n openshift-machine-api get bmh openshift-worker-<num> where: <num> Specifies the compute node number. Example output NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true Approve all certificate signing requests (CSRs). Get the list of pending CSRs by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending Approve the CSR by running the following command: USD oc adm certificate approve <csr_name> Example output certificatesigningrequest.certificates.k8s.io/<csr_name> approved Verification Verify that the node is ready by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd Additional resources Preparing the bare-metal node Root device hints Diagnosing a duplicate MAC address when provisioning a new host in the cluster 5.4. Optional: Managing existing hosts in a user-provisioned cluster by using the BMO Optionally, you can use the Bare Metal Operator (BMO) to manage existing bare-metal controller hosts in a user-provisioned cluster by creating a BareMetalHost object for the existing host. It is not a requirement to manage existing user-provisioned hosts; however, you can enroll them as externally-provisioned hosts for inventory purposes. Important To manage existing hosts by using the BMO, you must set the spec.externallyProvisioned specification in the BareMetalHost custom resource to true to prevent the BMO from re-provisioning the host. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Create the Secret CR and the BareMetalHost CR. Save the following YAML in the controller.yaml file: --- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: "controller1-bmc" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api 1 You can only use bare-metal host drivers that support virtual media networking booting, for example redfish-virtualmedia and idrac-virtualmedia . 2 You must set the value to true to prevent the BMO from re-provisioning the bare-metal controller host. Create the bare-metal host object by running the following command: USD oc create -f controller.yaml Example output secret/controller1-bmc created baremetalhost.metal3.io/controller1 created Verification Verify that the BMO created the bare-metal host object by running the following command: USD oc get bmh -A Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s 5.5. Removing hosts from a user-provisioned cluster by using the BMO You can use the Bare Metal Operator (BMO) to remove bare-metal hosts from a user-provisioned cluster. Prerequisites You created a user-provisioned bare-metal cluster. You have baseboard management controller (BMC) access to the hosts. You deployed a provisioning service in the cluster by creating a Provisioning CR. Procedure Cordon and drain the node by running the following command: USD oc adm drain app1 --force --ignore-daemonsets=true Example output node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained Delete the customDeploy specification from the BareMetalHost CR. Edit the BareMetalHost CR for the host by running the following command: USD oc edit bmh -n openshift-machine-api <host_name> Delete the lines spec.customDeploy and spec.customDeploy.method : ... customDeploy: method: install_coreos Verify that the provisioning state of the host changes to deprovisioning by running the following command: USD oc get bmh -A Example output NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m Delete the host by running the following command when the BareMetalHost state changes to available : USD oc delete bmh -n openshift-machine-api <bmh_name> Note You can run this step without having to edit the BareMetalHost CR. It might take some time for the BareMetalHost state to change from deprovisioning to available . Delete the node by running the following command: USD oc delete node <node_name> Verification Verify that you deleted the node by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd | [
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: false",
"oc create -f provisioning.yaml",
"provisioning.metal3.io/provisioning-configuration created",
"oc get pods -n openshift-machine-api",
"NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 7 next-hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5",
"oc create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending",
"oc adm certificate approve <csr_name>",
"certificatesigningrequest.certificates.k8s.io/<csr_name> approved",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd",
"--- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: \"controller1-bmc\" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api",
"oc create -f controller.yaml",
"secret/controller1-bmc created baremetalhost.metal3.io/controller1 created",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s",
"oc adm drain app1 --force --ignore-daemonsets=true",
"node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained",
"oc edit bmh -n openshift-machine-api <host_name>",
"customDeploy: method: install_coreos",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m",
"oc delete bmh -n openshift-machine-api <bmh_name>",
"oc delete node <node_name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_bare_metal/scaling-a-user-provisioned-cluster-with-the-bare-metal-operator |
10.8. Preparser | 10.8. Preparser If it is desirable to manipulate incoming queries prior to being handled by Teiid logic, then a custom pre-parser can be installed. Use the PreParser interface provided in the org.teiid.api jar to plug-in a pre-parser for the Teiid engine. See Setting up the build environment to start development. import org.teiid.PreParser; ... package com.something; public class CustomPreParser implements PreParser { @Override public String preParse(String command, CommandContext context) { //manipulate the command } } , build a JAR archive with above implementation class and create a file named org.teiid.PreParser in the META-INF/services directory with these contents: com.something.CustomPreParser The JAR has now been built. Deploy it in the JBoss AS as a module under jboss-as/modules directory. Now create a module: Create a directory called jboss-as/modules/com/something/main. In it create a "module.xml" file with these contents: <?xml version="1.0" encoding="UTF-8"?> <module xmlns="urn:jboss:module:1.0" name="com.something"> <resources> <resource-root path="something.jar" /> </resources> <dependencies> <module name="javax.api"/> <module name="javax.resource.api"/> <module name="org.jboss.teiid.common-core"/> <module name="org.jboss.teiid.teiid-api" /> </dependencies> </module> Copy the jar file under this same directory. Make sure you add any additional dependencies if required by your implementation class under dependencies. Use the command line interface or modify the configuration to set the preparser-module in the Teiid subsystem configuration to the appropriate module name. Restart the server Important Development Considerations Changing the incoming query to a different type of statement is not recommended as are any modifications to the number or types of projected symbols. | [
"import org.teiid.PreParser; package com.something; public class CustomPreParser implements PreParser { @Override public String preParse(String command, CommandContext context) { //manipulate the command } }",
"com.something.CustomPreParser",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.0\" name=\"com.something\"> <resources> <resource-root path=\"something.jar\" /> </resources> <dependencies> <module name=\"javax.api\"/> <module name=\"javax.resource.api\"/> <module name=\"org.jboss.teiid.common-core\"/> <module name=\"org.jboss.teiid.teiid-api\" /> </dependencies> </module>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/preparser |
probe::vm.mmap | probe::vm.mmap Name probe::vm.mmap - Fires when an mmap is requested Synopsis vm.mmap Values name name of the probe point length the length of the memory segment address the requested address Context The process calling mmap. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-vm-mmap |
Chapter 13. High availability and clusters | Chapter 13. High availability and clusters In Red Hat Enterprise Linux 8, pcs fully supports the Corosync 3 cluster engine and the Kronosnet (knet) network abstraction layer for cluster communication. When planning an upgrade to a RHEL 8 cluster from an existing RHEL 7 cluster, some of the considerations you must take into account are as follows: Application versions: What version of the highly-available application will the RHEL 8 cluster require? Application process order: What may need to change in the start and stop processes of the application? Cluster infrastructure: Since pcs supports multiple network connections in RHEL 8, does the number of NICs known to the cluster change? Needed packages: Do you need to install all of the same packages on the new cluster? Because of these and other considerations for running a Pacemaker cluster in RHEL 8, it is not possible to perform in-place upgrades from RHEL 7 to RHEL 8 clusters and you must configure a new cluster in RHEL 8. You cannot run a cluster that includes nodes running both RHEL 7 and RHEL 8. Additionally, you should plan for the following before performing an upgrade: Final cutover: What is the process to stop the application running on the old cluster and start it on the new cluster to reduce application downtime? Testing: Is it possible to test your upgrade strategy ahead of time in a development/test environment? The major differences in cluster creation and administration between RHEL 7 and RHEL 8 are listed in the following sections. 13.1. New formats for pcs cluster setup , pcs cluster node add and pcs cluster node remove commands In Red Hat Enterprise Linux 8, pcs fully supports the use of node names, which are now required and replace node addresses in the role of node identifier. Node addresses are now optional. In the pcs host auth command, node addresses default to node names. In the pcs cluster setup and pcs cluster node add commands, node addresses default to the node addresses specified in the pcs host auth command. With these changes, the formats for the commands to set up a cluster, add a node to a cluster, and remove a node from a cluster have changed. For information about these new command formats, see the help display for the pcs cluster setup , pcs cluster node add and pcs cluster node remove commands. 13.2. Master resources renamed to promotable clone resources Red Hat Enterprise Linux (RHEL) 8 supports Pacemaker 2.0, in which a master/slave resource is no longer a separate type of resource but a standard clone resource with a promotable meta-attribute set to true . The following changes have been implemented in support of this update: It is no longer possible to create master resources with the pcs command. Instead, it is possible to create promotable clone resources. Related keywords and commands have been changed from master to promotable . All existing master resources are displayed as promotable clone resources. When managing a RHEL7 cluster in the Web UI, master resources are still called master, as RHEL7 clusters do not support promotable clones. 13.3. New commands for authenticating nodes in a cluster Red Hat Enterprise Linux (RHEL) 8 incorporates the following changes to the commands used to authenticate nodes in a cluster. The new command for authentication is pcs host auth . This command allows users to specify host names, addresses and pcsd ports. The pcs cluster auth command authenticates only the nodes in a local cluster and does not accept a node list It is now possible to specify an address for each node. pcs / pcsd will then communicate with each node using the specified address. These addresses can be different than the ones corosync uses internally. The pcs pcsd clear-auth command has been replaced by the pcs pcsd deauth and pcs host deauth commands. The new commands allow users to deauthenticate a single host as well as all hosts. Previously, node authentication was bidirectional, and running the pcs cluster auth command caused all specified nodes to be authenticated against each other. The pcs host auth command, however, causes only the local host to be authenticated against the specified nodes. This allows better control of what node is authenticated against what other nodes when running this command. On cluster setup itself, and also when adding a node, pcs automatically synchronizes tokens on the cluster, so all nodes in the cluster are still automatically authenticated as before and the cluster nodes can communicate with each other. Note that these changes are not backward compatible. Nodes that were authenticated on a RHEL 7 system will need to be authenticated again. 13.4. LVM volumes in a Red Hat High Availability active/passive cluster When configuring LVM volumes as resources in a Red Hat HA active/passive cluster in RHEL 8, you configure the volumes as an LVM-activate resource. In RHEL 7, you configured the volumes as an LVM resource. For an example of a cluster configuration procedure that includes configuring an LVM volume as a resource in an active/passive cluster in RHEL 8, see Configuring an active/passive Apache HTTP server in a Red Hat High Availability cluster . 13.5. Shared LVM volumes in a Red Hat High Availability active/active cluster In RHEL 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage devices in an active/active cluster. This requires that you configure the logical volumes on which you mount a GFS2 file system as shared logical volumes. Additionally, this requires that you use the LVM-activate resource agent to manage an LVM volume and that you use the lvmlockd resource agent to manage the lvmlockd daemon. For a full procedure for configuring a RHEL 8 Pacemaker cluster that includes GFS2 file systems using shared logical volumes, see Configuring a GFS2 file system in a cluster . 13.6. GFS2 file systems in a RHEL 8 Pacemaker cluster In RHEL 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage devices in an active/active cluster as described in Section 12.3.1, "Removal of clvmd for managing shared storage devices" . To use GFS2 file systems that were created on a RHEL 7 system in a RHEL 8 cluster, you must configure the logical volumes on which they are mounted as shared logical volumes in a RHEL 8 system, and you must start locking for the volume group. For an example of the procedure that configures existing RHEL 7 logical volumes as shared logical volumes for use in a RHEL 8 Pacemaker cluster, see Migrating a GFS2 file system from RHEL7 to RHEL8 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/high-availability-and-clusters_considerations-in-adopting-rhel-8 |
Chapter 8. Updating an instance | Chapter 8. Updating an instance You can add and remove additional resources from running instances, such as persistent volume storage, a network interface, or a public IP address. You can also update instance metadata and the security groups that the instance belongs to. 8.1. Attaching a network to an instance You can attach a network to a running instance. When you attach a network to the instance, the Compute service creates the port on the network for the instance. Use a network to attach the network interface to an instance when you want to use the default security group and there is only one subnet on the network. Procedure Identify the available networks and note the name or ID of the network that you want to attach to your instance: If the network that you need is not available, create a new network: Attach the network to your instance: Optional: Include the --tag option and replace <tag_name> with the name of a tag for your virtual NIC device. Replace <instance> with the name or ID of the instance that you want to attach the network to. Replace <network> with the name or ID of the network that you want to attach to the instance. Tip To tag a virtual device at server creation time, see Tagging virtual devices . Additional resources openstack network create command in the Command line interface reference . Creating a network in the Configuring Red Hat OpenStack Platform networking guide. 8.2. Detaching a network from an instance You can detach a network from an instance. Note Detaching the network detaches all network ports. If the instance has multiple ports on a network and you want to detach only one of those ports, follow the Detaching a port from an instance procedure to detach the port. Procedure Identify the network that is attached to the instance: Detach the network from the instance: Replace <instance> with the name or ID of the instance that you want to remove the network from. Replace <network> with the name or ID of the network that you want to remove from the instance. 8.3. Attaching a port to an instance You can attach a network interface to a running instance by using a port. You can attach a port to only one instance at a time. Use a port to attach the network interface to an instance when you want to use a custom security group, or when there are multiple subnets on the network. Tip If you attach the network interface by using a network, the port is created automatically. For more information, see Attaching a network to an instance . Note Red Hat OpenStack Platform (RHOSP) provides up to 24 interfaces for each instance. By default, you can add up to 16 PCIe devices to an instance before you must reboot the instance to add more. The RHOSP administrator can use the NovaLibvirtNumPciePorts parameter to configure the number of PCIe devices that can be added to an instance, before a reboot of the instance is required to add more devices. Prerequisites If attaching a port with an SR-IOV vNIC to an instance, there must be a free SR-IOV device on the host on the appropriate physical network, and the instance must have a free PCIe slot. Procedure Create the port that you want to attach to your instance: Replace <network> with the name or ID of the network to create the port on. Optional: To create an SR-IOV port, replace <vnic-type> with one of the following values: direct : Creates a direct mode SR-IOV virtual function (VF) port. direct-physical : Creates a direct mode SR-IOV physical function (PF) port. macvtap : Creates an SR-IOV port that is attached to the instance through a MacVTap device. Replace <port> with the name or ID of the port that you want to attach to the instance. Attach the port to your instance: Replace <instance> with the name or ID of the instance that you want to attach the port to. Replace <port> with the name or ID of the port that you want to attach to the instance. Verify that the port is attached to your instance: Replace <instance_UUID> with the UUID of the instance that you attached the port to. Additional resources openstack port create command in the Command line interface reference . 8.4. Detaching a port from an instance You can detach a port from an instance. Procedure Identify the port that is attached to the instance: Detach the port from the instance: Replace <instance> with the name or ID of the instance that you want to remove the port from. Replace <port> with the name or ID of the port that you want to remove from the instance. 8.5. Attaching a volume to an instance You can attach a volume to an instance for persistent storage. You can attach a volume to only one instance at a time, unless the volume has been configured as a multi-attach volume. For more information about creating multi-attach volumes, see Volumes that can be attached to multiple instances . Prerequisites To attach a multi-attach volume, the environment variable OS_COMPUTE_API_VERSION is set to 2.60 or later. The instance is fully operational, or fully stopped. You cannot attach a volume to an instance when the instance is in the process of booting up or shutting down. To attach more than 26 volumes to your instance, the image you used to create the instance must have the following properties: hw_scsi_model=virtio-scsi hw_disk_bus=scsi Procedure Identify the available volumes and note the name or ID of the volume that you want to attach to your instance: Attach the volume to your instance: Optional: Include the --tag option and replace <tag_name> with the name of a tag for your virtual storage device. Replace <instance> with the name or ID of the instance that you want to attach the volume to. Replace <volume> with the name or ID of the volume that you want to attach to the instance. Note To tag a virtual device at server creation time, see Tagging virtual devices . Note If the command returns the following error, the volume you chose to attach to the instance is a multi-attach volume, therefore you must use Compute API version 2.60 or later: You can either set the environment variable OS_COMPUTE_API_VERSION=2.72 , or include the --os-compute-api-version argument when adding the volume to the instance: Tip Specify --os-compute-api-version 2.20 or higher to add a volume to an instance with status SHELVED or SHELVED_OFFLOADED . Confirm that the volume is attached to the instance or instances: Replace <volume> with the name or ID of the volume to display. Example output: 8.6. Viewing the volumes attached to an instance You can view the volumes attached to a particular instance. Prerequisites You are using python-openstackclient 5.5.0 . Procedure List the volumes attached to an instance: 8.7. Detaching a volume from an instance You can detach a volume from an instance. Note Detaching the network detaches all network ports. If the instance has multiple ports on a network and you want to detach only one of those ports, follow the Detaching a port from an instance procedure to detach the port. Prerequisites The instance is fully operational, or fully stopped. You cannot detach a volume from an instance when the instance is in the process of booting up or shutting down. Procedure Identify the volume that is attached to the instance: Detach the volume from the instance: Replace <instance> with the name or ID of the instance that you want to remove the volume from. Replace <volume> with the name or ID of the volume that you want to remove from the instance. Note Specify --os-compute-api-version 2.20 or higher to remove a volume from an instance with status SHELVED or SHELVED_OFFLOADED . | [
"(overcloud)USD openstack network list",
"(overcloud)USD openstack network create <network>",
"openstack server add network [--tag <tag_name>] <instance> <network>",
"(overcloud)USD openstack server show <instance>",
"openstack server remove network <instance> <network>",
"openstack port create --network <network> [--vnic-type <vnic-type>] <port>",
"openstack server add port <instance> <port>",
"openstack port list --device-id <instance_UUID>",
"(overcloud)USD openstack server show <instance>",
"openstack server remove port <instance> <port>",
"(overcloud)USD openstack volume list",
"openstack server add volume [--tag <tag_name>] <instance> <volume>",
"Multiattach volumes are only supported starting with compute API version 2.60. (HTTP 400) (Request-ID: req-3a969c31-e360-4c79-a403-75cc6053c9e5)",
"openstack --os-compute-api-version 2.72 server add volume <instance> <volume>",
"openstack volume show <volume>",
"+-----------------------------------------------------+----------------------+---------+-----+-----------------------------------------------------------------------------------------------+ | ID | Name | Status | Size| Attached to +-----------------------------------------------------+---------------------+---------+------+---------------------------------------------------------------------------------------------+ | f3fb92f6-c77b-429f-871d-65b1e3afa750 | volMultiattach | in-use | 50 | Attached to instance1 on /dev/vdb Attached to instance2 on /dev/vdb | +-----------------------------------------------------+----------------------+---------+-----+-----------------------------------------------------------------------------------------------+",
"openstack server volume list <instance> +---------------------+----------+---------------------+-----------------------+ | ID | Device | Server ID | Volume ID | +---------------------+----------+---------------------+-----------------------+ | 1f9dcb02-9a20-4a4b- | /dev/vda | ab96b635-1e63-4487- | 1f9dcb02-9a20-4a4b-9f | | 9f25-c7846a1ce9e8 | | a85c-854197cd537b | 25-c7846a1ce9e8 | +---------------------+----------+---------------------+-----------------------+",
"(overcloud)USD openstack server show <instance>",
"openstack server remove volume <instance> <volume>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/assembly_updating-an-instance_osp |
Chapter 26. Monitoring Stratis file systems | Chapter 26. Monitoring Stratis file systems As a Stratis user, you can view information about Stratis volumes on your system to monitor their state and free space. Important Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 26.1. Displaying information about Stratis volumes You can lists statistics about your Stratis volumes, such as the total, used, and free size or file systems and block devices belonging to a pool, by using the stratis utility. Standard Linux utilities such as df report the size of the XFS file system layer on Stratis, which is 1 TiB. This is not useful information, because the actual storage usage of Stratis is less due to thin provisioning, and also because Stratis automatically grows the file system when the XFS layer is close to full. Important Regularly monitor the amount of data written to your Stratis file systems, which is reported as the Total Physical Used value. Make sure it does not exceed the Total Physical Size value. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. Procedure To display information about all block devices used for Stratis on your system: To display information about all Stratis pools on your system: To display information about all Stratis file systems on your system: Additional resources stratis(8) man page on your system 26.2. Viewing a Stratis pool by using the web console You can use the web console to view an existing Stratis pool and the file systems it contains. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. You have an existing Stratis pool. Procedure Log in to the RHEL 8 web console. Click Storage . In the Storage table, click the Stratis pool you want to view. The Stratis pool page displays all the information about the pool and the file systems that you created in the pool. | [
"stratis blockdev Pool Name Device Node Physical Size State Tier my-pool /dev/sdb 9.10 TiB In-use Data",
"stratis pool Name Total Physical Size Total Physical Used my-pool 9.10 TiB 598 MiB",
"stratis filesystem Pool Name Name Used Created Device my-pool my-fs 546 MiB Nov 08 2018 08:03 /dev/stratis/ my-pool/my-fs"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/monitoring-stratis-file-systems |
Chapter 3. Disabling the hot corner functionality on GNOME Shell | Chapter 3. Disabling the hot corner functionality on GNOME Shell The GNOME environment provides the hot corner functionality, which is enabled by default. This means that when you move the cursor to the area of the upper-left corner and push the cursor to the screen corner, the Activities Overview menu opens automatically. However, you may want to disable this feature to not open Activities Overview unintentionally. 3.1. Disabling hot corner using Settings To disable the hot corner functionality using the Settings application, follow this procedure. Note This procedure disables the hot corner functionality for a single user. Procedure Open the Settings application by clicking the gear button. In the Settings application, go to Multitasking . In the General section, disable the Hot Corner button. Disabling hot corner using the Settings application 3.2. Disabling hot corner using gsettings To disable the hot corner functionality using the gsettings command-line utility, follow this procedure. Procedure Disable the hot corner feature: Verification Optionally, verify that the hot corner feature is disabled: 3.3. Disabling the hot corner functionality for all users To disable the hot corner functionality for all users, you need to create a dconf profile. Procedure Create the user profile in the /etc/dconf/profile/user file. Create a file in the /etc/dconf/db/local.d/ directory, for example /etc/dconf/db/local.d/00-interface , with the following content: Create a file in the /etc/dconf/db/local.d/locks directory, for example /etc/dconf/db/local.d/locks/00-interface , with the following content: The configuration file locks down the /org/gnome/desktop/interface/enable-hot-corners key for all users. This key controls whether the hot corner is enabled. Update the system databases for the changes to take effect. Ensure that all users log out. The changes take effect when users log back in. | [
"gsettings set org.gnome.desktop.interface enable-hot-corners false",
"gsettings get org.gnome.desktop.interface enable-hot-corners false",
"user-db:user system-db:local",
"Specify the dconf path GSettings key names and their corresponding values enable-hot-corners='FALSE'",
"Prevent users from changing values for the following keys: /org/gnome/desktop/interface/enable-hot-corners",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/disabling-the-hot-corner-functionality-on-gnome-shell_getting-started-with-the-gnome-desktop-environment |
2.2. Generating Instrumentation for Other Computers | 2.2. Generating Instrumentation for Other Computers When users run a SystemTap script, a kernel module is built out of that script. SystemTap then loads the module into the kernel, allowing it to extract the specified data directly from the kernel (see Procedure 3.1, "SystemTap Session" in Section 3.1, "Architecture" for more information). Normally, SystemTap scripts can only be run on systems where SystemTap is deployed (as in Section 2.1, "Installation and Setup" ). This could mean that to run SystemTap on ten systems, SystemTap needs to be deployed on all those systems. In some cases, this may be neither feasible nor desired. For instance, corporate policy may prohibit an administrator from installing packages that provide compilers or debug information on specific machines, which will prevent the deployment of SystemTap. To work around this, use cross-instrumentation . Cross-instrumentation is the process of generating SystemTap instrumentation modules from a SystemTap script on one computer to be used on another computer. This process offers the following benefits: The kernel information packages for various machines can be installed on a single host machine . Each target machine only needs one package to be installed to use the generated SystemTap instrumentation module: systemtap-runtime . Important The host system must be the same architecture and running the same distribution of Linux as the target system in order for the built instrumentation module to work. Note For the sake of simplicity, the following terms will be used throughout this section: instrumentation module The kernel module built from a SystemTap script; the SystemTap module is built on the host system , and will be loaded on the target kernel of the target system . host system The system on which the instrumentation modules (from SystemTap scripts) are compiled, to be loaded on target systems . target system The system in which the instrumentation module is being built (from SystemTap scripts). target kernel The kernel of the target system . This is the kernel which loads/runs the instrumentation module . Procedure 2.1. Configuring a Host System and Target Systems Install the systemtap-runtime package on each target system . Determine the kernel running on each target system by running uname -r on each target system . Install SystemTap on the host system . The instrumentation module will be built for the target systems on the host system . For instructions on how to install SystemTap, see Section 2.1.1, "Installing SystemTap" . Using the target kernel version determined earlier, install the target kernel and related packages on the host system by the method described in Section 2.1.2, "Installing Required Kernel Information Packages" . If multiple target systems use different target kernels , repeat this step for each different kernel used on the target systems . After performing Procedure 2.1, "Configuring a Host System and Target Systems" , the instrumentation module (for any target system ) can be built on the host system . To build the instrumentation module , run the following command on the host system (be sure to specify the appropriate values): Here, kernel_version refers to the version of the target kernel (the output of uname -r on the target machine), script refers to the script to be converted into an instrumentation module , and module_name is the desired name of the instrumentation module . Once the instrumentation module is compiled, copy it to the target system and then load it using: For example, to create the simple.ko instrumentation module from a SystemTap script named simple.stp for the 3.10.0-327.4.4.el7 target kernel , use the following command: This will create a module named simple.ko . To use the simple.ko instrumentation module , copy it to the target system and run the following command (on the target system ): | [
"stap -r kernel_version script -m module_name -p4",
"staprun module_name .ko",
"stap -r 2.6.32-53.el6 -e 'probe vfs.read {exit()}' -m simple -p4",
"staprun simple.ko"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/cross-compiling |
Feature Support Document | Feature Support Document Red Hat JBoss Data Grid 6.6 For use with Red Hat JBoss Data Grid 6.6.1 Christian Huffman Red Hat Engineering Content Services [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/feature_support_document/index |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.