Chapter 5. Using the Node Tuning Operator
Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the Tuned daemon.

5.1. About the Node Tuning Operator

The Node Tuning Operator helps you manage node-level tuning by orchestrating the Tuned daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface for node-level sysctls and the flexibility to add custom tuning for your needs.

The Operator manages the containerized Tuned daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures that the custom tuning specification is passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized Tuned daemon are rolled back on an event that triggers a profile change or when the containerized Tuned daemon is terminated gracefully by receiving and handling a termination signal.

The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.

5.2. Accessing an example Node Tuning Operator specification

Use this procedure to access an example Node Tuning Operator specification.

Procedure

Run:

$ oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator

The default CR is meant for delivering standard node-level tuning for OpenShift Container Platform, and it can only be modified to set the Operator Management state. Any other custom changes to the default CR are overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs are combined with the default CR, and the custom tuning is applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities; a command for listing the CRs that the Operator currently sees is shown after the warning below.

Warning

While the support for pod labels can, in certain situations, be a convenient way of automatically delivering the required tuning, the practice is strongly discouraged, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality is enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator.
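For example, to list every Tuned CR in the Operator's namespace, that is, the default CR plus any custom CRs you have created, a plain listing can be used. This is only an illustration; the output columns depend on your client and cluster version:

$ oc get Tuned -n openshift-cluster-node-tuning-operator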
5.3. Default profiles set on a cluster

The following are the default profiles set on a cluster.

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: default
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: "openshift"
    data: |
      [main]
      summary=Optimize systems running OpenShift (parent profile)
      include=${f:virt_check:virtual-guest:throughput-performance}
      [selinux]
      avc_cache_threshold=8192
      [net]
      nf_conntrack_hashsize=131072
      [sysctl]
      net.ipv4.ip_forward=1
      kernel.pid_max=>4194304
      net.netfilter.nf_conntrack_max=1048576
      net.ipv4.conf.all.arp_announce=2
      net.ipv4.neigh.default.gc_thresh1=8192
      net.ipv4.neigh.default.gc_thresh2=32768
      net.ipv4.neigh.default.gc_thresh3=65536
      net.ipv6.neigh.default.gc_thresh1=8192
      net.ipv6.neigh.default.gc_thresh2=32768
      net.ipv6.neigh.default.gc_thresh3=65536
      vm.max_map_count=262144
      [sysfs]
      /sys/module/nvme_core/parameters/io_timeout=4294967295
      /sys/module/nvme_core/parameters/max_retries=10
  - name: "openshift-control-plane"
    data: |
      [main]
      summary=Optimize systems running OpenShift control plane
      include=openshift
      [sysctl]
      # ktune sysctl settings, maximizing i/o throughput
      #
      # Minimal preemption granularity for CPU-bound tasks:
      # (default: 1 msec#  (1 + ilog(ncpus)), units: nanoseconds)
      kernel.sched_min_granularity_ns=10000000
      # The total time the scheduler will consider a migrated process
      # "cache hot" and thus less likely to be re-migrated
      # (system default is 500000, i.e. 0.5 ms)
      kernel.sched_migration_cost_ns=5000000
      # SCHED_OTHER wake-up granularity.
      #
      # Preemption granularity when tasks wake up. Lower the value to
      # improve wake-up latency and throughput for latency critical tasks.
      kernel.sched_wakeup_granularity_ns=4000000
  - name: "openshift-node"
    data: |
      [main]
      summary=Optimize systems running OpenShift nodes
      include=openshift
      [sysctl]
      net.ipv4.tcp_fastopen=3
      fs.inotify.max_user_watches=65536
      fs.inotify.max_user_instances=8192
  recommend:
  - profile: "openshift-control-plane"
    priority: 30
    match:
    - label: "node-role.kubernetes.io/master"
    - label: "node-role.kubernetes.io/infra"
  - profile: "openshift-node"
    priority: 40
5.4. Verifying that the Tuned profiles are applied

Use this procedure to check which Tuned profiles are applied on every node.

Procedure

Check which Tuned pods are running on each node:

$ oc get pods -n openshift-cluster-node-tuning-operator -o wide

Example output

NAME                                            READY   STATUS    RESTARTS   AGE    IP             NODE                                         NOMINATED NODE   READINESS GATES
cluster-node-tuning-operator-599489d4f7-k4hw4   1/1     Running   0          6d2h   10.129.0.76    ip-10-0-145-113.eu-west-3.compute.internal   <none>           <none>
tuned-2jkzp                                     1/1     Running   1          6d3h   10.0.145.113   ip-10-0-145-113.eu-west-3.compute.internal   <none>           <none>
tuned-g9mkx                                     1/1     Running   1          6d3h   10.0.147.108   ip-10-0-147-108.eu-west-3.compute.internal   <none>           <none>
tuned-kbxsh                                     1/1     Running   1          6d3h   10.0.132.143   ip-10-0-132-143.eu-west-3.compute.internal   <none>           <none>
tuned-kn9x6                                     1/1     Running   1          6d3h   10.0.163.177   ip-10-0-163-177.eu-west-3.compute.internal   <none>           <none>
tuned-vvxwx                                     1/1     Running   1          6d3h   10.0.131.87    ip-10-0-131-87.eu-west-3.compute.internal    <none>           <none>
tuned-zqrwq                                     1/1     Running   1          6d3h   10.0.161.51    ip-10-0-161-51.eu-west-3.compute.internal    <none>           <none>

Extract the profile applied from each pod and match them against the list:

$ for p in `oc get pods -n openshift-cluster-node-tuning-operator -l openshift-app=tuned -o=jsonpath='{range .items[*]}{.metadata.name} {end}'`; do printf "\n*** $p ***\n" ; oc logs pod/$p -n openshift-cluster-node-tuning-operator | grep applied; done

Example output

*** tuned-2jkzp ***
2020-07-10 13:53:35,368 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied

*** tuned-g9mkx ***
2020-07-10 14:07:17,089 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied
2020-07-10 15:56:29,005 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied
2020-07-10 16:00:19,006 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied
2020-07-10 16:00:48,989 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied

*** tuned-kbxsh ***
2020-07-10 13:53:30,565 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied
2020-07-10 15:56:30,199 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied

*** tuned-kn9x6 ***
2020-07-10 14:10:57,123 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied
2020-07-10 15:56:28,757 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied

*** tuned-vvxwx ***
2020-07-10 14:11:44,932 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied

*** tuned-zqrwq ***
2020-07-10 14:07:40,246 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied

5.5. Custom tuning specification

The custom resource (CR) for the Operator has two major sections. The first section, profile:, is a list of Tuned profiles and their names. The second, recommend:, defines the profile selection logic.

Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized Tuned daemons are updated.

Management state

The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows:

Managed: the Operator will update its operands as configuration resources are updated

Unmanaged: the Operator will ignore changes to the configuration resources

Removed: the Operator will remove its operands and resources the Operator provisioned
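For example, to move the Operator to the Unmanaged state, set spec.managementState on the default Tuned CR. One way to do this is a merge patch, sketched below; editing the default CR directly with oc edit achieves the same result:

$ oc patch Tuned/default -n openshift-cluster-node-tuning-operator --type merge -p '{"spec":{"managementState":"Unmanaged"}}'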
Profile data

The profile: section lists Tuned profiles and their names.

profile:
- name: tuned_profile_1
  data: |
    # Tuned profile specification
    [main]
    summary=Description of tuned_profile_1 profile

    [sysctl]
    net.ipv4.ip_forward=1
    # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned

# ...

- name: tuned_profile_n
  data: |
    # Tuned profile specification
    [main]
    summary=Description of tuned_profile_n profile

    # tuned_profile_n profile settings

Recommended profiles

The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items that recommend the profiles based on selection criteria.

recommend:
<recommend-item-1>
# ...
<recommend-item-n>

The individual items of the list:

- machineConfigLabels:  1
    <mcLabels>  2
  match:  3
    <match>  4
  priority: <priority>  5
  profile: <tuned_profile_name>  6
  operand:  7
    debug: <bool>  8

1 Optional.
2 A dictionary of key/value MachineConfig labels. The keys must be unique.
3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set.
4 An optional list.
5 Profile ordering priority. Lower numbers mean higher priority (0 is the highest priority).
6 A TuneD profile to apply on a match. For example, tuned_profile_1.
7 Optional operand configuration.
8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false.

<match> is an optional list recursively defined as follows:

- label: <label_name>  1
  value: <label_value>  2
  type: <label_type>  3
    <match>  4

1 Node or pod label name.
2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match.
3 Optional object type (node or pod). If omitted, node is assumed.
4 An optional <match> list.

If <match> is not omitted, all nested <match> sections must also evaluate to true. Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as a logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true. Therefore, the list acts as a logical OR operator.

If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name>. This involves finding all machine config pools with a machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned to the found machine config pools. To target nodes that have both master and worker roles, you must use the master role.

The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true, the machineConfigLabels item is not considered.

Important

When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in Tuned operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool.
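The optional operand: section shown above is independent of the matching rules; it can be used, for example, to turn on TuneD daemon debugging while troubleshooting a profile. A minimal sketch, using an illustrative profile name and priority:

recommend:
- profile: tuned_profile_1   # illustrative profile name
  priority: 20               # illustrative priority
  operand:
    debug: true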
Example: node or pod label based matching

- match:
  - label: tuned.openshift.io/elasticsearch
    match:
    - label: node-role.kubernetes.io/master
    - label: node-role.kubernetes.io/infra
    type: pod
  priority: 10
  profile: openshift-control-plane-es
- match:
  - label: node-role.kubernetes.io/master
  - label: node-role.kubernetes.io/infra
  priority: 20
  profile: openshift-control-plane
- priority: 30
  profile: openshift-node

The CR above is translated for the containerized Tuned daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority (10) is openshift-control-plane-es and, therefore, it is considered first. The containerized Tuned daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false. If there is such a pod with the label, in order for the <match> section to evaluate to true, the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra.

If the labels for the profile with priority 10 matched, the openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile (openshift-control-plane) is considered. This profile is applied if the containerized Tuned pod runs on a node with the label node-role.kubernetes.io/master or node-role.kubernetes.io/infra.

Finally, the profile openshift-node has the lowest priority of 30. It lacks the <match> section and, therefore, always matches. It acts as a catch-all that sets the openshift-node profile if no other profile with a higher priority matches on a given node.

Example: machine config pool based matching

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-custom
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom OpenShift node profile with an additional kernel parameter
      include=openshift-node
      [bootloader]
      cmdline_openshift_node_custom=+skew_tick=1
    name: openshift-node-custom
  recommend:
  - machineConfigLabels:
      machineconfiguration.openshift.io/role: "worker-custom"
    priority: 20
    profile: openshift-node-custom

To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above, and finally create the custom machine config pool itself; a sketch of such a pool follows.
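For completeness, a custom machine config pool whose selectors line up with the worker-custom role used above might look like the following sketch. The pool name and node-role label are illustrative, and the exact MachineConfigPool layout should be checked against the machine configuration documentation for your cluster version before use:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-custom                  # illustrative pool name matching the role label above
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: [worker, worker-custom]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-custom: ""   # label the target nodes with this first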
5.6. Custom tuning example

The following CR applies custom node-level tuning for OpenShift Container Platform nodes with the label tuned.openshift.io/ingress-node-label set to any value. As an administrator, use the following command to create a custom Tuned CR.

Custom tuning example

$ oc create -f- <<_EOF_
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: ingress
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=A custom OpenShift ingress profile
      include=openshift-control-plane
      [sysctl]
      net.ipv4.ip_local_port_range="1024 65535"
      net.ipv4.tcp_tw_reuse=1
    name: openshift-ingress
  recommend:
  - match:
    - label: tuned.openshift.io/ingress-node-label
    priority: 10
    profile: openshift-ingress
_EOF_

Important

Custom profile writers are strongly encouraged to include the default Tuned daemon profiles shipped within the default Tuned CR. The example above uses the default openshift-control-plane profile to accomplish this.

5.7. Supported Tuned daemon plug-ins

Excluding the [main] section, the following Tuned plug-ins are supported when using custom profiles defined in the profile: section of the Tuned CR:

audio
cpu
disk
eeepc_she
modules
mounts
net
scheduler
scsi_host
selinux
sysctl
sysfs
usb
video
vm

There is some dynamic tuning functionality provided by some of these plug-ins that is not supported. The following Tuned plug-ins are currently not supported:

bootloader
script
systemd

See Available Tuned Plug-ins and Getting Started with Tuned for more information.
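As an illustration of combining the supported plug-ins listed above, the following sketch defines a custom profile that inherits openshift-node and uses the [vm] plug-in to disable transparent huge pages on matching nodes. The profile name, the node label, and the transparent_hugepages option come from general TuneD usage rather than from this chapter, so verify them against the TuneD documentation for your version:

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-no-thp                    # illustrative name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Custom profile that disables transparent huge pages
      include=openshift-node
      [vm]
      transparent_hugepages=never
    name: openshift-no-thp
  recommend:
  - match:
    - label: tuned.openshift.io/no-thp      # illustrative node label
    priority: 15
    profile: openshift-no-thp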
[ "oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: \"openshift\" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: \"openshift-control-plane\" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift [sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # \"cache hot\" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: \"openshift-node\" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40", "oc get pods -n openshift-cluster-node-tuning-operator -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cluster-node-tuning-operator-599489d4f7-k4hw4 1/1 Running 0 6d2h 10.129.0.76 ip-10-0-145-113.eu-west-3.compute.internal <none> <none> tuned-2jkzp 1/1 Running 1 6d3h 10.0.145.113 ip-10-0-145-113.eu-west-3.compute.internal <none> <none> tuned-g9mkx 1/1 Running 1 6d3h 10.0.147.108 ip-10-0-147-108.eu-west-3.compute.internal <none> <none> tuned-kbxsh 1/1 Running 1 6d3h 10.0.132.143 ip-10-0-132-143.eu-west-3.compute.internal <none> <none> tuned-kn9x6 1/1 Running 1 6d3h 10.0.163.177 ip-10-0-163-177.eu-west-3.compute.internal <none> <none> tuned-vvxwx 1/1 Running 1 6d3h 10.0.131.87 ip-10-0-131-87.eu-west-3.compute.internal <none> <none> tuned-zqrwq 1/1 Running 1 6d3h 10.0.161.51 ip-10-0-161-51.eu-west-3.compute.internal <none> <none>", "for p in `oc get pods -n openshift-cluster-node-tuning-operator -l openshift-app=tuned -o=jsonpath='{range .items[*]}{.metadata.name} {end}'`; do printf \"\\n*** USDp ***\\n\" ; oc logs pod/USDp -n openshift-cluster-node-tuning-operator | grep applied; done", "*** tuned-2jkzp *** 2020-07-10 13:53:35,368 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied *** tuned-g9mkx *** 2020-07-10 14:07:17,089 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 15:56:29,005 INFO tuned.daemon.daemon: 
static tuning from profile 'openshift-node-es' applied 2020-07-10 16:00:19,006 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 16:00:48,989 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied *** tuned-kbxsh *** 2020-07-10 13:53:30,565 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 15:56:30,199 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied *** tuned-kn9x6 *** 2020-07-10 14:10:57,123 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node' applied 2020-07-10 15:56:28,757 INFO tuned.daemon.daemon: static tuning from profile 'openshift-node-es' applied *** tuned-vvxwx *** 2020-07-10 14:11:44,932 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied *** tuned-zqrwq *** 2020-07-10 14:07:40,246 INFO tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied", "profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "oc create -f- <<_EOF_ apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress _EOF_" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/using-node-tuning-operator
Chapter 15. IngressController [operator.openshift.io/v1]
Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the IngressController. status object status is the most recently observed status of the IngressController. 15.1.1. .spec Description spec is the specification of the desired behavior of the IngressController. Type object Property Type Description clientTLS object clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. defaultCertificate object defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. domain string domain is a DNS name serviced by the ingress controller and is used to configure multiple features: * For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy. * When using a generated default certificate, the certificate will be valid for domain and its subdomains.
See defaultCertificate. * The value is published to individual Route statuses so that end-users know where to target external DNS records. domain must be unique among all IngressControllers, and cannot be updated. If empty, defaults to ingress.config.openshift.io/cluster .spec.domain. endpointPublishingStrategy object endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. httpCompression object httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. httpEmptyRequestsPolicy string httpEmptyRequestsPolicy describes how HTTP connections should be handled if the connection times out before a request is received. Allowed values for this field are "Respond" and "Ignore". If the field is set to "Respond", the ingress controller sends an HTTP 400 or 408 response, logs the connection (if access logging is enabled), and counts the connection in the appropriate metrics. If the field is set to "Ignore", the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. The default value is "Respond". Typically, these connections come from load balancers' health probes or Web browsers' speculative connections ("preconnect") and can be safely ignored. However, these requests may also be caused by network errors, and so setting this field to "Ignore" may impede detection and diagnosis of problems. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. httpErrorCodePages object httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. httpHeaders object httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. logging object logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. namespaceSelector object namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. nodePlacement object nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. 
replicas integer replicas is the desired number of ingress controller replicas. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. The value of replicas is set based on the value of a chosen field in the Infrastructure CR. If defaultPlacement is set to ControlPlane, the chosen field will be controlPlaneTopology. If it is set to Workers, the chosen field will be infrastructureTopology. Replicas will then be set to 1 or 2 based on whether the chosen field's value is SingleReplica or HighlyAvailable, respectively. These defaults are subject to change. routeAdmission object routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. routeSelector object routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. tuningOptions object tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. unsupportedConfigOverrides `` unsupportedConfigOverrides allows specifying unsupported configuration options. Its use is unsupported. 15.1.2. .spec.clientTLS Description clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. Type object Required clientCA clientCertificatePolicy Property Type Description allowedSubjectPatterns array (string) allowedSubjectPatterns specifies a list of regular expressions that should be matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. If this list is empty, no filtering is performed. If the list is nonempty, then at least one pattern must match a client certificate's distinguished name or else the ingress controller rejects the certificate and denies the connection. clientCA object clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. clientCertificatePolicy string clientCertificatePolicy specifies whether the ingress controller requires clients to provide certificates. This field accepts the values "Required" or "Optional". Note that the ingress controller only checks client certificates for edge-terminated and reencrypt TLS routes; it cannot check certificates for cleartext HTTP or passthrough TLS routes. 15.1.3.
.spec.clientTLS.clientCA Description clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.4. .spec.defaultCertificate Description defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 15.1.5. .spec.endpointPublishingStrategy Description endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. 
Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.6. .spec.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks. The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready.
The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.7. .spec.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.8. .spec.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.9. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. 
networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.10. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. 15.1.11. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object 15.1.12. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.13. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. 
Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.14. .spec.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.15. .spec.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.16. .spec.httpCompression Description httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. Type object Property Type Description mimeTypes array (string) mimeTypes is a list of MIME types that should have compression applied. This list can be empty, in which case the ingress controller does not apply compression. 
Note: Not all MIME types benefit from compression, but HAProxy will still use resources to try to compress if instructed to. Generally speaking, text (html, css, js, etc.) formats benefit from compression, but formats that are already compressed (image, audio, video, etc.) benefit little in exchange for the time and cpu spent on compressing again. See https://joehonton.medium.com/the-gzip-penalty-d31bd697f1a2 15.1.17. .spec.httpErrorCodePages Description httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.18. .spec.httpHeaders Description httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. Type object Property Type Description actions object actions specifies options for modifying headers and their values. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be modified for TLS passthrough connections. Setting the HSTS ( Strict-Transport-Security ) header is not supported via actions. Strict-Transport-Security may only be configured using the "haproxy.router.openshift.io/hsts_header" route annotation, and only in accordance with the policy specified in Ingress.Spec.RequiredHSTSPolicies. Any actions defined here are applied after any actions related to the following other fields: cache-control, spec.clientTLS, spec.httpHeaders.forwardedHeaderPolicy, spec.httpHeaders.uniqueId, and spec.httpHeaders.headerNameCaseAdjustments. In case of HTTP request headers, the actions specified in spec.httpHeaders.actions on the Route will be executed after the actions specified in the IngressController's spec.httpHeaders.actions field. In case of HTTP response headers, the actions specified in spec.httpHeaders.actions on the IngressController will be executed after the actions specified in the Route's spec.httpHeaders.actions field. Headers set using this API cannot be captured for use in access logs. The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. Please refer to the documentation for that API field for more details. forwardedHeaderPolicy string forwardedHeaderPolicy specifies when and how the IngressController sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. 
The value may be one of the following: * "Append", which specifies that the IngressController appends the headers, preserving existing headers. * "Replace", which specifies that the IngressController sets the headers, replacing any existing Forwarded or X-Forwarded-* headers. * "IfNone", which specifies that the IngressController sets the headers if they are not already set. * "Never", which specifies that the IngressController never sets the headers, preserving any existing headers. By default, the policy is "Append". headerNameCaseAdjustments `` headerNameCaseAdjustments specifies case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying "X-Forwarded-For" indicates that the "x-forwarded-for" HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. uniqueId object uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. 15.1.19. .spec.httpHeaders.actions Description actions specifies options for modifying headers and their values. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be modified for TLS passthrough connections. Setting the HSTS ( Strict-Transport-Security ) header is not supported via actions. Strict-Transport-Security may only be configured using the "haproxy.router.openshift.io/hsts_header" route annotation, and only in accordance with the policy specified in Ingress.Spec.RequiredHSTSPolicies. Any actions defined here are applied after any actions related to the following other fields: cache-control, spec.clientTLS, spec.httpHeaders.forwardedHeaderPolicy, spec.httpHeaders.uniqueId, and spec.httpHeaders.headerNameCaseAdjustments. In case of HTTP request headers, the actions specified in spec.httpHeaders.actions on the Route will be executed after the actions specified in the IngressController's spec.httpHeaders.actions field. In case of HTTP response headers, the actions specified in spec.httpHeaders.actions on the IngressController will be executed after the actions specified in the Route's spec.httpHeaders.actions field. Headers set using this API cannot be captured for use in access logs. The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. Please refer to the documentation for that API field for more details. 
Type object Property Type Description request array request is a list of HTTP request headers to modify. Actions defined here will modify the request headers of all requests passing through an ingress controller. These actions are applied to all Routes, that is, for all connections handled by the ingress controller defined within a cluster. IngressController actions for request headers will be executed before Route actions. Currently, actions may only set or delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". request[] object IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. response array response is a list of HTTP response headers to modify. Actions defined here will modify the response headers of all requests passing through an ingress controller. These actions are applied to all Routes, that is, for all connections handled by the ingress controller defined within a cluster. IngressController actions for response headers will be executed after Route actions. Currently, actions may only set or delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". response[] object IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. 15.1.20. .spec.httpHeaders.actions.request Description request is a list of HTTP request headers to modify. Actions defined here will modify the request headers of all requests passing through an ingress controller. These actions are applied to all Routes, that is, for all connections handled by the ingress controller defined within a cluster. IngressController actions for request headers will be executed before Route actions. Currently, actions may only set or delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Type array 15.1.21. .spec.httpHeaders.actions.request[] Description IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required action name Property Type Description action object action specifies actions to perform on headers, such as setting or deleting headers. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric characters and the following special characters: "-!#$%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. It must be no more than 255 characters in length. The header name must be unique. 15.1.22. .spec.httpHeaders.actions.request[].action Description action specifies actions to perform on headers, such as setting or deleting headers.
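The request and response action lists described above can be sketched as follows; this is an illustrative, assumption-laden example (the header names X-Target and X-Internal-Debug are not defaults), showing one Set action that uses an allowed fetcher and converter, and one Delete action.

spec:
  httpHeaders:
    actions:
      request:
      - name: X-Target                          # assumed header name
        action:
          type: Set
          set:
            value: "%[req.hdr(X-target),lower]" # dynamic value using the req.hdr fetcher and lower converter
      response:
      - name: X-Internal-Debug                  # assumed header name to strip from responses
        action:
          type: Delete                          # Delete takes no set stanza

As noted above, request actions defined on the IngressController run before Route-level request actions, while response actions run after Route-level response actions.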
Type object Required type Property Type Description set object set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 15.1.23. .spec.httpHeaders.actions.request[].action.set Description set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 15.1.24. .spec.httpHeaders.actions.response Description response is a list of HTTP response headers to modify. Actions defined here will modify the response headers of all requests passing through an ingress controller. These actions are applied to all Routes, that is, for all connections handled by the ingress controller defined within a cluster. IngressController actions for response headers will be executed after Route actions. Currently, actions may only set or delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Type array 15.1.25. .spec.httpHeaders.actions.response[] Description IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required action name Property Type Description action object action specifies actions to perform on headers, such as setting or deleting headers. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric characters and the following special characters: "-!#$%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. It must be no more than 255 characters in length. The header name must be unique. 15.1.26. .spec.httpHeaders.actions.response[].action Description action specifies actions to perform on headers, such as setting or deleting headers. Type object Required type Property Type Description set object set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 15.1.27. .spec.httpHeaders.actions.response[].action.set Description set specifies how the HTTP header should be set.
This field is required when type is Set and forbidden otherwise. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 15.1.28. .spec.httpHeaders.uniqueId Description uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. Type object Property Type Description format string format specifies the format for the injected HTTP header's value. This field has no effect unless name is specified. For the HAProxy-based ingress controller implementation, this format uses the same syntax as the HTTP log format. If the field is empty, the default value is "%{+X}o\\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid"; see the corresponding HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 name string name specifies the name of the HTTP header (for example, "unique-id") that the ingress controller should inject into HTTP requests. The field's value must be a valid HTTP header name as defined in RFC 2616 section 4.2. If the field is empty, no header is injected. 15.1.29. .spec.logging Description logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. Type object Property Type Description access object access describes how the client requests should be logged. If this field is empty, access logging is disabled. 15.1.30. .spec.logging.access Description access describes how the client requests should be logged. If this field is empty, access logging is disabled. Type object Required destination Property Type Description destination object destination is where access logs go. httpCaptureCookies `` httpCaptureCookies specifies HTTP cookies that should be captured in access logs. If this field is empty, no cookies are captured. httpCaptureHeaders object httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. httpLogFormat string httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. 
For HAProxy's default HTTP log format, see the HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 Note that this format only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). It does not affect the log format for TLS passthrough connections. logEmptyRequests string logEmptyRequests specifies how connections on which no request is received should be logged. Typically, these empty requests come from load balancers' health probes or Web browsers' speculative connections ("preconnect"), in which case logging these requests may be undesirable. However, these requests may also be caused by network errors, in which case logging empty requests may be useful for diagnosing the errors. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. Allowed values for this field are "Log" and "Ignore". The default value is "Log". 15.1.31. .spec.logging.access.destination Description destination is where access logs go. Type object Required type Property Type Description container object container holds parameters for the Container logging destination. Present only if type is Container. syslog object syslog holds parameters for a syslog endpoint. Present only if type is Syslog. type string type is the type of destination for logs. It must be one of the following: * Container The ingress operator configures the sidecar container named "logs" on the ingress controller pod and configures the ingress controller to write logs to the sidecar. The logs are then available as container logs. The expectation is that the administrator configures a custom logging solution that reads logs from this sidecar. Note that using container logs means that logs may be dropped if the rate of logs exceeds the container runtime's or the custom logging solution's capacity. * Syslog Logs are sent to a syslog endpoint. The administrator must specify an endpoint that can receive syslog messages. The expectation is that the administrator has configured a custom syslog instance. 15.1.32. .spec.logging.access.destination.container Description container holds parameters for the Container logging destination. Present only if type is Container. Type object Property Type Description maxLength integer maxLength is the maximum length of the log message. Valid values are integers in the range 480 to 8192, inclusive. When omitted, the default value is 1024. 15.1.33. .spec.logging.access.destination.syslog Description syslog holds parameters for a syslog endpoint. Present only if type is Syslog. Type object Required address port Property Type Description address string address is the IP address of the syslog endpoint that receives log messages. facility string facility specifies the syslog facility of log messages. If this field is empty, the facility is "local1". maxLength integer maxLength is the maximum length of the log message. Valid values are integers in the range 480 to 4096, inclusive. When omitted, the default value is 1024. port integer port is the UDP port number of the syslog endpoint that receives log messages. 15.1.34. .spec.logging.access.httpCaptureHeaders Description httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. 
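Putting the access-logging fields above together, here is a hedged sketch that forwards access logs to an assumed external syslog receiver; the address, port, and maxLength shown are assumptions, not defaults.

spec:
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 10.0.0.10      # assumed syslog endpoint reachable from the router pods
          port: 514
          facility: local1
          maxLength: 1024
      logEmptyRequests: Ignore    # drop log entries for connections that carry no request

Setting destination.type to Container instead would write access logs to the "logs" sidecar container described above.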
Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. Type object Property Type Description request `` request specifies which HTTP request headers to capture. If this field is empty, no request headers are captured. response `` response specifies which HTTP response headers to capture. If this field is empty, no response headers are captured. 15.1.35. .spec.namespaceSelector Description namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.36. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.37. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.38. .spec.nodePlacement Description nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. Type object Property Type Description nodeSelector object nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. tolerations array tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. 
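namespaceSelector is typically used for sharding. For example, an ingress controller that should only serve namespaces carrying an assumed label ingress-shard: internal could be sketched as:

spec:
  namespaceSelector:
    matchLabels:
      ingress-shard: internal   # assumed label; only namespaces with this label are serviced
    # an equivalent matchExpressions form would be:
    # matchExpressions:
    # - key: ingress-shard
    #   operator: In
    #   values:
    #   - internal

The label key and value are illustrative; any label applied to the target namespaces works.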
See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 15.1.39. .spec.nodePlacement.nodeSelector Description nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.40. .spec.nodePlacement.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.41. .spec.nodePlacement.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.42. .spec.nodePlacement.tolerations Description tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Type array 15.1.43. .spec.nodePlacement.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. 
Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 15.1.44. .spec.routeAdmission Description routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. Type object Property Type Description namespaceOwnership string namespaceOwnership describes how host name claims across namespaces should be handled. Value must be one of: - Strict: Do not allow routes in different namespaces to claim the same host. - InterNamespaceAllowed: Allow routes to claim different paths of the same host name across namespaces. If empty, the default is Strict. wildcardPolicy string wildcardPolicy describes how routes with wildcard policies should be handled for the ingress controller. WildcardPolicy controls use of routes [1] exposed by the ingress controller based on the route's wildcard policy. [1] https://github.com/openshift/api/blob/master/route/v1/types.go Note: Updating WildcardPolicy from WildcardsAllowed to WildcardsDisallowed will cause admitted routes with a wildcard policy of Subdomain to stop working. These routes must be updated to a wildcard policy of None to be readmitted by the ingress controller. WildcardPolicy supports WildcardsAllowed and WildcardsDisallowed values. If empty, defaults to "WildcardsDisallowed". 15.1.45. .spec.routeSelector Description routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.46. .spec.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.47. .spec.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
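A sketch that combines the nodePlacement and routeAdmission fields described above; the infra node label and taint are assumptions about the cluster, not defaults.

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""     # schedule router pods onto assumed infra nodes
    tolerations:
    - key: node-role.kubernetes.io/infra      # assumed taint on those nodes
      operator: Exists
      effect: NoSchedule
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed # let different namespaces claim paths of the same host
    wildcardPolicy: WildcardsAllowed          # admit routes with a Subdomain wildcard policy

Remember that, as noted above, nodeSelector supports only matchLabels, not matchExpressions.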
This array is replaced during a strategic merge patch. 15.1.48. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: VersionTLS12 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: VersionTLS13 old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: VersionTLS10 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 15.1.49. .spec.tuningOptions Description tuningOptions defines parameters for adjusting the performance of ingress controller pods. 
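As an illustration of the tlsSecurityProfile types listed above, the following sketch selects the predefined Intermediate profile; switching to a Custom profile would instead set type: Custom and supply explicit ciphers and minTLSVersion values like those quoted in the custom example above. This is a sketch, not a recommendation.

spec:
  tlsSecurityProfile:
    type: Intermediate
    intermediate: {}   # contents come from the predefined Intermediate profile; no fields to set here

Note that, as stated above, the effective cipher list for a predefined profile can change between releases.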
All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. Type object Property Type Description clientFinTimeout string clientFinTimeout defines how long a connection will be held open while waiting for the client response to the server/backend closing the connection. If unset, the default timeout is 1s clientTimeout string clientTimeout defines how long a connection will be held open while waiting for a client response. If unset, the default timeout is 30s connectTimeout string connectTimeout defines the maximum time to wait for a connection attempt to a server/backend to succeed. This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". When omitted, this means the user has no opinion and the platform is left to choose a reasonable default. This default is subject to change over time. The current default is 5s. headerBufferBytes integer headerBufferBytes describes how much memory should be reserved (in bytes) for IngressController connection sessions. Note that this value must be at least 16384 if HTTP/2 is enabled for the IngressController ( https://tools.ietf.org/html/rfc7540 ). If this field is empty, the IngressController will use a default value of 32768 bytes. Setting this field is generally not recommended as headerBufferBytes values that are too small may break the IngressController and headerBufferBytes values that are too large could cause the IngressController to use significantly more memory than necessary. headerBufferMaxRewriteBytes integer headerBufferMaxRewriteBytes describes how much memory should be reserved (in bytes) from headerBufferBytes for HTTP header rewriting and appending for IngressController connection sessions. Note that incoming HTTP requests will be limited to (headerBufferBytes - headerBufferMaxRewriteBytes) bytes, meaning headerBufferBytes must be greater than headerBufferMaxRewriteBytes. If this field is empty, the IngressController will use a default value of 8192 bytes. Setting this field is generally not recommended as headerBufferMaxRewriteBytes values that are too small may break the IngressController and headerBufferMaxRewriteBytes values that are too large could cause the IngressController to use significantly more memory than necessary. healthCheckInterval string healthCheckInterval defines how long the router waits between two consecutive health checks on its configured backends. This value is applied globally as a default for all routes, but may be overridden per-route by the route annotation "router.openshift.io/haproxy.health.check.interval". Expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Setting this to less than 5s can cause excess traffic due to too frequent TCP health checks and accompanying SYN packet storms. Alternatively, setting this too high can result in increased latency, due to backend servers that are no longer available, but haven't yet been detected as such. An empty or zero healthCheckInterval means no opinion and IngressController chooses a default, which is subject to change over time.
Currently the default healthCheckInterval value is 5s. Currently the minimum allowed value is 1s and the maximum allowed value is 2147483647ms (24.85 days). Both are subject to change over time. maxConnections integer maxConnections defines the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections but at the cost of additional system resources being consumed. Permitted values are: empty, 0, -1, and the range 2000-2000000. If this field is empty or 0, the IngressController will use the default value of 50000, but the default is subject to change in future releases. If the value is -1 then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. Selecting -1 (i.e., auto) will result in a large value being computed (~520000 on OpenShift >=4.10 clusters) and therefore each HAProxy process will incur significant memory usage compared to the current default of 50000. Setting a value that is greater than the current operating system limit will prevent the HAProxy process from starting. If you choose a discrete value (e.g., 750000) and the router pod is migrated to a new node, there's no guarantee that the new node has identical ulimits configured. In such a scenario the pod would fail to start. If you have nodes with different ulimits configured (e.g., different tuned profiles) and you choose a discrete value then the guidance is to use -1 and let the value be computed dynamically at runtime. You can monitor memory usage for router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}'. You can monitor memory usage of individual HAProxy processes in router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}/container_processes{container="router",namespace="openshift-ingress"}'. reloadInterval string reloadInterval defines the minimum interval at which the router is allowed to reload to accept new changes. Increasing this value can prevent the accumulation of HAProxy processes, depending on the scenario. Increasing this interval can also lessen load imbalance on a backend's servers when using the roundrobin balancing algorithm. Alternatively, decreasing this value may decrease latency since updates to HAProxy's configuration can take effect more quickly. The value must be a time duration value; see https://pkg.go.dev/time#ParseDuration . Currently, the minimum value allowed is 1s, and the maximum allowed value is 120s. Minimum and maximum allowed values may change in future versions of OpenShift. Note that if a duration outside of these bounds is provided, the value of reloadInterval will be capped/floored and not rejected (e.g. a duration of over 120s will be capped to 120s; the IngressController will not reject and replace this disallowed value with the default). A zero value for reloadInterval tells the IngressController to choose the default, which is currently 5s and subject to change without notice. This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Note: Setting a value significantly larger than the default of 5s can cause latency in observing updates to routes and their endpoints.
HAProxy's configuration will be reloaded less frequently, and newly created routes will not be served until the subsequent reload. serverFinTimeout string serverFinTimeout defines how long a connection will be held open while waiting for the server/backend response to the client closing the connection. If unset, the default timeout is 1s serverTimeout string serverTimeout defines how long a connection will be held open while waiting for a server/backend response. If unset, the default timeout is 30s threadCount integer threadCount defines the number of threads created per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy currently supports up to 64 threads. If this field is empty, the IngressController will use the default value. The current default is 4 threads, but this may change in future releases. Setting this field is generally not recommended. Increasing the number of HAProxy threads allows ingress controller pods to utilize more CPU time under load, potentially starving other pods if set too high. Reducing the number of threads may cause the ingress controller to perform poorly. tlsInspectDelay string tlsInspectDelay defines how long the router can hold data to find a matching route. Setting this too short can cause the router to fall back to the default certificate for edge-terminated or reencrypt routes even when a better matching certificate could be used. If unset, the default inspect delay is 5s tunnelTimeout string tunnelTimeout defines how long a tunnel connection (including websockets) will be held open while the tunnel is idle. If unset, the default timeout is 1h 15.1.50. .status Description status is the most recently observed status of the IngressController. Type object Property Type Description availableReplicas integer availableReplicas is number of observed available replicas according to the ingress controller deployment. conditions array conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. conditions[] object OperatorCondition is just the standard condition fields. domain string domain is the actual domain in use. endpointPublishingStrategy object endpointPublishingStrategy is the actual strategy in use. namespaceSelector object namespaceSelector is the actual namespaceSelector in use. observedGeneration integer observedGeneration is the most recent generation observed. 
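Before moving on to the remaining status fields, here is a sketch of the spec.tuningOptions fields described in 15.1.49; the values are illustrative assumptions rather than recommendations, and the defaults are usually sufficient.

spec:
  tuningOptions:
    clientTimeout: 45s        # hold idle client connections slightly longer than the 30s default
    serverTimeout: 45s
    healthCheckInterval: 10s  # check backends less often than the 5s default
    reloadInterval: 15s       # allow HAProxy to reload at most once every 15s
    threadCount: 4            # the current default, shown only for completeness

Each duration must be a valid duration string within the documented bounds for its field.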
routeSelector object routeSelector is the actual routeSelector in use. selector string selector is a label selector, in string format, for ingress controller pods corresponding to the IngressController. The number of matching pods should equal the value of availableReplicas. tlsProfile object tlsProfile is the TLS connection configuration that is in effect. 15.1.51. .status.conditions Description conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. Type array 15.1.52. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 15.1.53. .status.endpointPublishingStrategy Description endpointPublishingStrategy is the actual strategy in use. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. 
In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.54. .status.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified, it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified, it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks. The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready. The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified, it defaults to 1936.
Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.56. .status.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.57. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). 
See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.58. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. 15.1.59. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object 15.1.60. .status.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.61. .status.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. 
Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.62. .status.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.63. .status.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.64. .status.namespaceSelector Description namespaceSelector is the actual namespaceSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.65. .status.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. 
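The status fields in this section are populated by the ingress operator and are read-only; they are normally inspected rather than edited. A hedged sketch of typical commands (the jsonpath expressions are illustrative):

$ oc -n openshift-ingress-operator get ingresscontroller/default -o yaml
$ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
$ oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.endpointPublishingStrategy.type}'

The first command shows the full object, including status.conditions, status.endpointPublishingStrategy, and status.tlsProfile as described above.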
Type array 15.1.66. .status.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.67. .status.routeSelector Description routeSelector is the actual routeSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.68. .status.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.69. .status.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.70. .status.tlsProfile Description tlsProfile is the TLS connection configuration that is in effect. Type object Property Type Description ciphers array (string) ciphers is used to specify the cipher algorithms that are negotiated during the TLS handshake. Operators may remove entries their operands do not support. For example, to use DES-CBC3-SHA (yaml): ciphers: - DES-CBC3-SHA minTLSVersion string minTLSVersion is used to specify the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2 and 1.3 (yaml): minTLSVersion: VersionTLS11 NOTE: currently the highest minTLSVersion allowed is VersionTLS12 15.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/ingresscontrollers GET : list objects of kind IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers DELETE : delete collection of IngressController GET : list objects of kind IngressController POST : create an IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} DELETE : delete an IngressController GET : read the specified IngressController PATCH : partially update the specified IngressController PUT : replace the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale GET : read scale of the specified IngressController PATCH : partially update scale of the specified IngressController PUT : replace scale of the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status GET : read status of the specified IngressController PATCH : partially update status of the specified IngressController PUT : replace status of the specified IngressController 15.2.1. /apis/operator.openshift.io/v1/ingresscontrollers HTTP method GET Description list objects of kind IngressController Table 15.1. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty 15.2.2. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers HTTP method DELETE Description delete collection of IngressController Table 15.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IngressController Table 15.3. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressController Table 15.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.5. Body parameters Parameter Type Description body IngressController schema Table 15.6. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 202 - Accepted IngressController schema 401 - Unauthorized Empty 15.2.3. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} Table 15.7. 
Global path parameters Parameter Type Description name string name of the IngressController HTTP method DELETE Description delete an IngressController Table 15.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 15.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressController Table 15.10. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressController Table 15.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.12. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressController Table 15.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.14. Body parameters Parameter Type Description body IngressController schema Table 15.15. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty 15.2.4. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale Table 15.16. Global path parameters Parameter Type Description name string name of the IngressController HTTP method GET Description read scale of the specified IngressController Table 15.17. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified IngressController Table 15.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.19. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified IngressController Table 15.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.21. 
Body parameters Parameter Type Description body Scale schema Table 15.22. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 15.2.5. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status Table 15.23. Global path parameters Parameter Type Description name string name of the IngressController HTTP method GET Description read status of the specified IngressController Table 15.24. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified IngressController Table 15.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.26. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified IngressController Table 15.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.28. Body parameters Parameter Type Description body IngressController schema Table 15.29. 
HTTP responses HTTP code Response body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty
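In practice, these endpoints are usually exercised through the oc client rather than raw HTTP requests. The following commands are a minimal sketch of that workflow; the IngressController name default and the openshift-ingress-operator namespace are the values used by a standard cluster ingress setup and are assumptions here rather than guarantees for every deployment.

$ oc get ingresscontroller -A

$ oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.status.endpointPublishingStrategy}{"\n"}'

$ oc patch ingresscontroller default -n openshift-ingress-operator --type=merge -p '{"spec":{"replicas":3}}'

The first command maps to the cluster-scoped list endpoint, the second reads fields that are surfaced through the status subresource, and the third issues a PATCH against the IngressController object to change spec.replicas; scale-aware tooling can achieve the same result through the /scale subresource.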
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/ingresscontroller-operator-openshift-io-v1
function::ansi_set_color
function::ansi_set_color Name function::ansi_set_color - Set the ANSI Select Graphic Rendition mode. Synopsis ansi_set_color(fg:long) Arguments fg Foreground color to set. Description Sends the ANSI code for Select Graphic Rendition mode for the given foreground color. Valid values are Black (30), Blue (34), Green (32), Cyan (36), Red (31), Purple (35), Brown (33), and Light Gray (37).
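The function can be tried directly from the command line. The probe below is a small sketch that assumes a standard SystemTap tapset installation and an ANSI-capable terminal; it also uses the companion function ansi_reset_color from the same tapset to restore the default rendition.

# green.stp - print one line in green, then restore the terminal rendition
probe begin {
  ansi_set_color(32)   # 32 selects the green foreground
  println("colored by ansi_set_color")
  ansi_reset_color()   # restore the default Select Graphic Rendition
  exit()
}

Run it with stap green.stp; the escape sequences are written to the terminal before the script exits.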
[ "ansi_set_color(fg:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ansi-set-color
Chapter 145. KafkaNodePoolSpec schema reference
Chapter 145. KafkaNodePoolSpec schema reference Used in: KafkaNodePool Property Description replicas The number of pods in the pool. integer storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod]. EphemeralStorage , PersistentClaimStorage , JbodStorage roles The roles that the nodes in this pool will have when KRaft mode is enabled. Supported values are 'broker' and 'controller'. This field is required. When KRaft mode is disabled, the only allowed value is broker . string (one or more of [controller, broker]) array resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements jvmOptions JVM Options for pods. JvmOptions template Template for pool resources. The template allows users to specify how the resources belonging to this pool are generated. KafkaNodePoolTemplate
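The properties above come together in a small manifest such as the following sketch of a broker-only pool. The apiVersion, the pool name, and the strimzi.io/cluster label value are typical for Streams for Apache Kafka deployments and are assumptions to adapt to your own Kafka cluster.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster   # binds the pool to an existing Kafka resource
spec:
  replicas: 3                        # number of pods in the pool
  roles:
    - broker                         # 'controller' is also valid when KRaft mode is enabled
  storage:
    type: jbod                       # cannot be updated after creation
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources:
    requests:
      cpu: "1"
      memory: 2Gi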
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkanodepoolspec-reference
Chapter 12. Installing with z/VM on IBM Z and LinuxONE
Chapter 12. Installing with z/VM on IBM Z and LinuxONE 12.1. Preparing to install with z/VM on IBM Z and LinuxONE 12.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 12.1.2. Choosing a method to install OpenShift Container Platform with z/VM on IBM Z or LinuxONE You can install a cluster with z/VM on IBM Z or LinuxONE infrastructure that you provision, by using one of the following methods: Installing a cluster with z/VM on IBM Z and LinuxONE : You can install OpenShift Container Platform with z/VM on IBM Z or LinuxONE infrastructure that you provision. Installing a cluster with z/VM on IBM Z and LinuxONE in a restricted network : You can install OpenShift Container Platform with z/VM on IBM Z or LinuxONE infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 12.2. Installing a cluster with z/VM on IBM Z and LinuxONE In OpenShift Container Platform version 4.9, you can install a cluster on IBM Z or LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z, all information in it also applies to LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 12.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using NFS for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 12.2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. 
Before you update the cluster, you update the content of the mirror registry. 12.2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 12.2.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.9 on the following IBM hardware: IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s LinuxONE, any version Hardware requirements The equivalent of 6 IFLs, which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to setup the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. 
Operating system requirements One instance of z/VM 7.1 or later On your z/VM instance, set up: 3 guest virtual machines for OpenShift Container Platform control plane machines 2 guest virtual machines for OpenShift Container Platform compute machines 1 guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 12.2.3.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.1 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE in IBM Documentation. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 12.2.3.5. 
Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z & LinuxONE environments 12.2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 12.2.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 12.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 12.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. 
If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 12.2.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 12.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. 
Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 12.2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 12.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 12.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. 
IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 12.2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. Configure the following ports on both the front and back of the load balancers: Table 12.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. 
Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 12.8. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 12.2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Example 12.3. 
Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 12.2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 12.2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. 
Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 12.2.6. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. 
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 12.2.9. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 12.2.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.2.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev .
platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.2.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.2.9.1.3. 
Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ).
fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 12.2.9.2. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{"auths": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance.
4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 12 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 12.2.9.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.2.9.4. Configuring a three-node cluster You can optionally deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. 
In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 12.2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 12.2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 12.12. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. 
For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 12.13. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 12.14. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. 
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 12.15. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. Table 12.16. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 12.17. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 12.2.11. 
Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 12.2.12. 
Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. 
Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Post-installation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=sda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 12.2.12.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 12.2.12.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following table provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. 
For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set option fail_over_mac=1 in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 12.2.13. 
Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 12.2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
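While you wait for the remaining machines to register and their CSRs to appear, you can optionally keep the CSR list refreshing on screen. This is a convenience only, not a required part of the documented procedure, and the 5-second refresh interval is an arbitrary choice: USD watch -n5 oc get csr Press Ctrl+C to stop watching, then continue with the review steps that follow.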
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 12.2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 12.2.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. 
To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The registry storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 12.2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 12.2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
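The oc commands in the following procedure assume that your CLI session is authenticated to the cluster. If it is not, one option is to export the kubeconfig credentials that the installation program generated; the sketch below assumes the default auth/kubeconfig location inside your installation directory.

# Optional: authenticate the oc CLI by using the installer-generated kubeconfig.
# <installation_directory> is the directory that you stored the installation files in.
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc whoami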
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 12.2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.2.19. Collecting debugging information You can gather debugging information that might help you to troubleshoot and debug certain issues with an OpenShift Container Platform installation on IBM Z. Prerequisites The oc CLI tool is installed. Procedure Log in to the cluster: USD oc login -u <username> -p <password> <server_url> On the node you want to gather hardware information about, start a debugging container: USD oc debug node/<node_name> Change to the /host file system and start toolbox : # chroot /host # toolbox Collect the dbginfo data: # dbginfo.sh You can then retrieve the data, for example, by using scp . Additional resources How to generate SOSREPORT within OpenShift4 nodes without SSH . 12.2.20. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . 12.3. Installing a cluster with z/VM on IBM Z and LinuxONE in a restricted network In OpenShift Container Platform version 4.9, you can install a cluster on IBM Z and LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z, all information in it also applies to LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 12.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users .
You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using NFS for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 12.3.2. About installations in restricted networks In OpenShift Container Platform 4.9, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 12.3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 12.3.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.3.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.18. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.3.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.19. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 12.3.4.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.9 on the following IBM hardware: IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s LinuxONE, any version Hardware requirements The equivalent of 6 IFLs, which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to setup the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. 
Operating system requirements One instance of z/VM 7.1 or later On your z/VM instance, set up: 3 guest virtual machines for OpenShift Container Platform control plane machines 2 guest virtual machines for OpenShift Container Platform compute machines 1 guest virtual machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 12.3.4.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Two or three instances of z/VM 7.1 or later for high availability On your z/VM instances, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, one per z/VM instance. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the z/VM instances. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE in IBM Documentation. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. Disk storage for the z/VM guest virtual machines FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 12.3.4.5. 
Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. See Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z & LinuxONE environments 12.3.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 12.3.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. 
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 12.3.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 12.20. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.21. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 12.22. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 12.3.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 12.23. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. 
A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 12.3.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 12.4. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. 
The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 12.5. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 12.3.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Note Session persistence is not required for the API load balancer to function properly. 
Configure the following ports on both the front and back of the load balancers: Table 12.24. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 12.25. Application ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic 1936 The worker nodes that run the Ingress Controller pods, by default. You must configure the /healthz/ready endpoint for the ingress health check probe. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 12.3.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
Example 12.6. Sample API and application ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 In the example, the cluster name is ocp4 . 2 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 3 5 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 4 Port 22623 handles the machine config server traffic and points to the control plane machines. 6 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 7 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . 12.3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 12.3.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. 
Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 0 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 0 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 0 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 0 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 12.3.7. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.3.8. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
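Before you embed the pull secret in the install-config.yaml file, you can optionally confirm that the secret you downloaded is valid JSON. A minimal check is shown below; the file name pull-secret.txt is only an example and should be replaced with the path where you saved your pull secret.

# Optional: confirm that the downloaded pull secret parses as JSON.
# The file name pull-secret.txt is illustrative.
python3 -m json.tool pull-secret.txt > /dev/null && echo "pull secret is valid JSON"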
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 12.3.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.3.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.26. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 12.3.8.1.2.
Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.27. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.3.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.28. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). 
String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. 
Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 12.3.8.2. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. 
In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 12 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 14 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 15 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 16 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 
17 Provide the imageContentSources section from the output of the command to mirror the repository. 12.3.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.3.8.4. Configuring a three-node cluster You can optionally deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. 
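After such a cluster is installed, the control plane nodes are expected to carry the worker role in addition to the master role, because there are no dedicated compute nodes. The following is a minimal check, a sketch that assumes the default node-role labels:
# In a three-node cluster, the ROLES column for each control plane node is
# expected to show both master and worker.
oc get nodes -l node-role.kubernetes.io/master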
In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 12.3.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 12.3.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 12.29. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. 
For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 12.30. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 12.31. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. 
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 12.32. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. Table 12.33. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 12.34. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 12.3.10. 
Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 12.3.11. 
Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. 
Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional post-installation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Post-installation machine configuration tasks . The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=sda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 12.3.11.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 12.3.11.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following table provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. 
For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set option fail_over_mac=1 in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 12.3.12. 
Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 12.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Configure the Operators that are not available. 12.3.15.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 12.3.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 12.3.15.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation, you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry.operator.openshift.io Then, change the line managementState: Removed to managementState: Managed 12.3.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 12.3.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 12.3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 12.3.18. Collecting debugging information You can gather debugging information that might help you to troubleshoot and debug certain issues with an OpenShift Container Platform installation on IBM Z. Prerequisites You have installed the oc CLI tool. Procedure Log in to the cluster: USD oc login -u <username> On the node you want to gather hardware information about, start a debugging container: USD oc debug node/<nodename> Change to the /host file system and start toolbox: USD chroot /host toolbox Collect the dbginfo data: USD dbginfo.sh You can then retrieve the data, for example, using scp (a sketch follows below, after the next steps). Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH. 12.3.19. Next steps Customize your cluster. If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
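The following is a minimal sketch of retrieving the dbginfo archive referenced in the debugging procedure above. It is not part of the documented procedure: the core user and the archive path are assumptions, and dbginfo.sh prints the actual name and location of the archive it creates, so substitute that value:
USD scp core@<node_ip_or_hostname>:<path_to_dbginfo_archive> .
Replace <node_ip_or_hostname> with the node you debugged and <path_to_dbginfo_archive> with the file reported by dbginfo.sh; direct SSH access to the node is required for this to work.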
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 
0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\": ...}' 14 sshKey: 'ssh-ed25519 AAAA...' 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", 
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m 
openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc login -u <username>", "oc debug node/<nodename>", "chroot /host toolbox", "dbginfo.sh", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; master0.ocp4.example.com. IN A 192.168.1.97 5 master1.ocp4.example.com. IN A 192.168.1.98 6 master2.ocp4.example.com. IN A 192.168.1.99 7 ; worker0.ocp4.example.com. IN A 192.168.1.11 8 worker1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 frontend stats bind *:1936 mode http log global maxconn 10 stats enable stats hide-version stats refresh 30s stats show-node stats show-desc Stats for ocp4 cluster 1 stats auth admin:ocp4 stats uri /stats listen api-server-6443 2 bind *:6443 mode tcp server bootstrap bootstrap.ocp4.example.com:6443 check inter 1s backup 3 server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 4 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 5 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 6 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 7 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 0 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 0 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 0 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 0 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 
0 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture : s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OpenShiftSDN serviceNetwork: 11 - 172.30.0.0/16 platform: none: {} 12 fips: false 13 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 14 sshKey: 'ssh-ed25519 AAAA...' 15 additionalTrustBundle: | 16 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 17 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490", "rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.22.1 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s 
system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.9 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.9.0 True False False 19m baremetal 4.9.0 True False False 37m cloud-credential 4.9.0 True False False 40m cluster-autoscaler 4.9.0 True False False 37m config-operator 4.9.0 True False False 38m console 4.9.0 True False False 26m csi-snapshot-controller 4.9.0 True False False 37m dns 4.9.0 True False False 37m etcd 4.9.0 True False False 36m image-registry 4.9.0 True False False 31m ingress 4.9.0 True False False 30m insights 4.9.0 True False False 31m kube-apiserver 4.9.0 True False False 26m kube-controller-manager 4.9.0 True False False 36m kube-scheduler 4.9.0 True False False 36m kube-storage-version-migrator 4.9.0 True False False 37m machine-api 4.9.0 True False False 29m machine-approver 4.9.0 True False False 37m 
machine-config 4.9.0 True False False 36m marketplace 4.9.0 True False False 37m monitoring 4.9.0 True False False 29m network 4.9.0 True False False 38m node-tuning 4.9.0 True False False 37m openshift-apiserver 4.9.0 True False False 32m openshift-controller-manager 4.9.0 True False False 30m openshift-samples 4.9.0 True False False 32m operator-lifecycle-manager 4.9.0 True False False 37m operator-lifecycle-manager-catalog 4.9.0 True False False 37m operator-lifecycle-manager-packageserver 4.9.0 True False False 32m service-ca 4.9.0 True False False 38m storage 4.9.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "oc login -u <username>", "oc debug node/<nodename>", "chroot /host toolbox", "dbginfo.sh" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-with-z-vm-on-ibm-z-and-linuxone
Chapter 9. Manually inactivating users and roles
Chapter 9. Manually inactivating users and roles In Directory Server, you can temporarily inactivate a single user account or a set of accounts. Once an account is inactivated, a user cannot bind to the directory. The authentication operation fails. 9.1. Inactivation and activation of users and roles using the command line You can manually inactivate users and roles using the command line or the operational attribute. Roles behave as both a static and a dynamic group. With a group, entries are added to a group entry as members. With a role, the role attribute is added to an entry and then that attribute is used to identify members in the role entry automatically. Users and roles are inactivated by executing the same procedures. However, when a role is inactivated, the members of the role are inactivated, not the role entry itself. To inactivate users and roles, execute the following commands in the command line: For inactivation of a user account: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " account lock " uid=user_name,ou=People,dc=example,dc=com " For inactivation of a role: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " role lock " cn=Marketing,ou=People,dc=example,dc=com " To activate users and roles, execute the following commands in the command line: For activation of a user account: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " account unlock " uid=user_name,ou=People,dc=example,dc=com " For activation of a role: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " role unlock " cn=Marketing,ou=People,dc=example,dc=com " Optionally, instead of using the commands, you can add the operational attribute nsAccountLock to the entry. When an entry contains the nsAccountLock attribute with a value of true, the server rejects the bind (a sketch of setting this attribute with ldapmodify follows at the end of this chapter). Additional resources Using roles in Directory Server Managing directory attributes and values Configuring directory databases 9.2. Commands for displaying the status of an account or a role You can display the status of an account or a role in Directory Server using the corresponding commands in the command line. Commands for displaying the status Display the status of an account: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " account entry-status " uid=user_name,ou=People,dc=example,dc=com " Entry DN: uid=user_name,ou=People,dc=example,dc=com Entry Creation Date: 20210813085535Z (2021-08-13 08:55:35) Entry Modification Date: 20210813085535Z (2021-08-13 08:55:35) Entry State: activated Optional: The -V option displays additional details. Example 9.1. Detailed output for an active account # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " account entry-status " uid=user_name,ou=People,dc=example,dc=com " -V Entry DN: uid=user_name,ou=People,dc=example,dc=com Entry Creation Date: 20210824160645Z (2021-08-24 16:06:45) Entry Modification Date: 20210824160645Z (2021-08-24 16:06:45) Entry Last Login Date: 20210824160645Z (2021-08-24 16:06:45) Entry Time Until Inactive: 2 seconds (2021-08-24 16:07:45) Entry State: activated Example 9.2.
Detailed output for an inactive account # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " account entry-status " uid=user_name,ou=People,dc=example,dc=com " -V Entry DN: uid=user_name,ou=People,dc=example,dc=com Entry Creation Date: 20210824160645Z (2021-08-24 16:06:45) Entry Modification Date: 20210824160645Z (2021-08-24 16:06:45) Entry Last Login Date: 20210824160645Z (2021-08-24 16:06:45) Entry Time Since Inactive: 3 seconds (2021-08-24 16:07:45) Entry State: inactivity limit exceeded Display the status of a role: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " role entry-status " cn=Marketing,ou=People,dc=example,dc=com " Entry DN: cn=Marketing,ou=people,dc=example,dc=com Entry State: activated Display the status of a sub-tree: # dsidm -D " cn=Directory Manager " ldap://server.example.com -b " dc=example,dc=com " account subtree-status " ou=People,dc=example,dc=com " -f " (uid=*) " -V -o " 2021-08-25T14:30:30 " To filter the results of the search in a sub-tree, you can use: The -f option to set the search filter The -s option to set the search scope The -i option to return only inactive accounts The -o option to return only accounts which will be inactive before the specified date YYYY-MM-DDTHH:MM:SS
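The following is a minimal sketch, not part of the documented procedure, of setting the nsAccountLock operational attribute mentioned in section 9.1 directly with the OpenLDAP client tools instead of dsidm. The bind as cn=Directory Manager and the use of a here-document are assumptions:
# ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com <<EOF
dn: uid=user_name,ou=People,dc=example,dc=com
changetype: modify
replace: nsAccountLock
nsAccountLock: true
EOF
After this change the server rejects binds by the entry. To reactivate the account, set nsAccountLock back to false or delete the attribute, or use the dsidm account unlock command shown in section 9.1.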
[ "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" account lock \" uid=user_name,ou=People,dc=example,dc=com \"", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" role lock \" cn=Marketing,ou=People,dc=example,dc=com \"", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" account unlock \" uid=user_name,ou=People,dc=example,dc=com \"", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" role unlock \" cn=Marketing,ou=People,dc=example,dc=com \"", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" account entry-status \" uid=user_name,ou=People,dc=example,dc=com \" Entry DN: uid=user_name,ou=People,dc=example,dc=com Entry Creation Date: 20210813085535Z (2021-08-13 08:55:35) Entry Modification Date: 20210813085535Z (2021-08-13 08:55:35) Entry State: activated", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" account entry-status \" uid=user_name,ou=People,dc=example,dc=com \" -V Entry DN: uid=user_name,ou=People,dc=example,dc=com Entry Creation Date: 20210824160645Z (2021-08-24 16:06:45) Entry Modification Date: 20210824160645Z (2021-08-24 16:06:45) Entry Last Login Date: 20210824160645Z (2021-08-24 16:06:45) Entry Time Until Inactive: 2 seconds (2021-08-24 16:07:45) Entry State: activated", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" account entry-status \" uid=user_name,ou=People,dc=example,dc=com \" -V Entry DN: uid=user_name,ou=People,dc=example,dc=com Entry Creation Date: 20210824160645Z (2021-08-24 16:06:45) Entry Modification Date: 20210824160645Z (2021-08-24 16:06:45) Entry Last Login Date: 20210824160645Z (2021-08-24 16:06:45) Entry Time Since Inactive: 3 seconds (2021-08-24 16:07:45) Entry State: inactivity limit exceeded", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" role entry-status \" cn=Marketing,ou=People,dc=example,dc=com \" Entry DN: cn=Marketing,ou=people,dc=example,dc=com Entry State: activated", "dsidm -D \" cn=Directory Manager \" ldap://server.example.com -b \" dc=example,dc=com \" account subtree-status \" ou=People,dc=example,dc=com \" -f \" (uid=*) \" -V -o \" 2021-08-25T14:30:30 \"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/user_management_and_authentication/manually-inactivating-users-and-roles
Security hardening
Security hardening Red Hat Enterprise Linux 8 Enhancing security of Red Hat Enterprise Linux 8 systems Red Hat Customer Content Services
[ "yum update", "systemctl start firewalld systemctl enable firewalld", "systemctl disable cups", "systemctl list-units | grep service", "fips-mode-setup --check FIPS mode is enabled.", "fips-mode-setup --enable Kernel initramdisks are being regenerated. This might take some time. Setting system policy to FIPS Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place. FIPS mode will be enabled. Please reboot the system for the setting to take effect.", "reboot", "fips-mode-setup --check FIPS mode is enabled.", "update-crypto-policies --set FIPS", "update-crypto-policies --show DEFAULT", "update-crypto-policies --set <POLICY> <POLICY>", "reboot", "update-crypto-policies --show <POLICY>", "update-crypto-policies --set LEGACY Setting system policy to LEGACY", "wget --secure-protocol= TLSv1_1 --ciphers=\" SECURE128 \" https://example.com", "curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'", "cd /etc/crypto-policies/policies/modules/", "touch MYCRYPTO-1 .pmod touch SCOPES-AND-WILDCARDS .pmod", "vi MYCRYPTO-1 .pmod", "min_rsa_size = 3072 hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512", "vi SCOPES-AND-WILDCARDS .pmod", "Disable the AES-128 cipher, all modes cipher = -AES-128-* Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK) cipher@TLS = -CHACHA20-POLY1305 Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH) group@SSH = FFDHE-1024+ Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH) cipher@SSH = -*-CBC Allow the AES-256-CBC cipher in applications using libssh cipher@libssh = AES-256-CBC+", "update-crypto-policies --set DEFAULT: MYCRYPTO-1 : SCOPES-AND-WILDCARDS", "reboot", "cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size min_rsa_size = 3072", "update-crypto-policies --set DEFAULT:NO-SHA1", "reboot", "cd /etc/crypto-policies/policies/ touch MYPOLICY .pol", "cp /usr/share/crypto-policies/policies/ DEFAULT .pol /etc/crypto-policies/policies/ MYPOLICY .pol", "vi /etc/crypto-policies/policies/ MYPOLICY .pol", "update-crypto-policies --set MYPOLICY", "reboot", "--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active", "ansible-playbook --syntax-check ~/verify_playbook.yml", "ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }", "cat /usr/share/p11-kit/modules/opensc.module module: opensc-pkcs11.so", "ssh-keygen -D pkcs11: > keys.pub", "ssh-copy-id -f -i keys.pub <[email protected]>", "ssh -i \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i \"pkcs11:id=%01\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i pkcs11: <ssh-server-example.com> Enter PIN for 
'SSH key': [ssh-server-example.com] USD", "cat ~/.ssh/config IdentityFile \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" ssh <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "wget --private-key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --certificate 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/", "curl --key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --cert 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/", "SSLCertificateFile \"pkcs11:id=%01;token=softhsm;type=cert\" SSLCertificateKeyFile \"pkcs11:id=%01;token=softhsm;type=private?pin-value=111111\"", "ssl_certificate /path/to/cert.pem ssl_certificate_key \"engine:pkcs11:pkcs11:token=softhsm;id=%01;type=private?pin-value=111111\";", "Authentication is required to access the PC/SC daemon", "journalctl -b | grep pcsc Process 3087 (user: 1001) is NOT authorized for action: access_pcsc", "journalctl -u polkit polkitd[NNN]: Error compiling script /etc/polkit-1/rules.d/00-debug-pcscd.rules polkitd[NNN]: Operator of unix-session:c2 FAILED to authenticate to gain authorization for action org.debian.pcsc-lite.access_pcsc for unix-process:4800:14441 [/usr/libexec/gsd-smartcard] (owned by unix-user:group)", "#!/bin/bash cd /proc for p in [0-9]* do if grep libpcsclite.so.1.0.0 USDp/maps &> /dev/null then echo -n \"process: \" cat USDp/cmdline echo \" (USDp)\" fi done", "./pcsc-apps.sh process: /usr/libexec/gsd-smartcard (3048) enable-sync --auto-ssl-client-auth --enable-crashpad (4828)", "touch /etc/polkit-1/rules.d/00-test.rules", "vi /etc/polkit-1/rules.d/00-test.rules", "polkit.addRule(function(action, subject) { if (action.id == \"org.debian.pcsc-lite.access_pcsc\" || action.id == \"org.debian.pcsc-lite.access_card\") { polkit.log(\"action=\" + action); polkit.log(\"subject=\" + subject); } });", "systemctl restart pcscd.service pcscd.socket polkit.service", "journalctl -u polkit --since \"1 hour ago\" polkitd[1224]: <no filename>:4: action=[Action id='org.debian.pcsc-lite.access_pcsc'] polkitd[1224]: <no filename>:5: subject=[Subject pid=2020481 user=user' groups=user,wheel,mock,wireshark seat=null session=null local=true active=true]", "wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml", "oscap oval eval --report vulnerability.html rhel-8.oval.xml", "firefox vulnerability.html &", "wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml", "oscap-ssh <username> @ <hostname> <port> oval eval --report <scan-report.html> rhel-8.oval.xml", "Data stream ├── xccdf | ├── benchmark | ├── profile | | ├──rule reference | | └──variable | ├── rule | ├── human readable data | ├── oval reference ├── oval ├── ocil reference ├── ocil ├── cpe reference └── cpe └── remediation", "ls /usr/share/xml/scap/ssg/content/ ssg-firefox-cpe-dictionary.xml ssg-rhel6-ocil.xml ssg-firefox-cpe-oval.xml ssg-rhel6-oval.xml ... ssg-rhel6-ds-1.2.xml ssg-rhel8-oval.xml ssg-rhel8-ds.xml ssg-rhel8-xccdf.xml ...", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml Profiles: ... 
Title: Health Insurance Portability and Accountability Act (HIPAA) Id: xccdf_org.ssgproject.content_profile_hipaa Title: PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 Id: xccdf_org.ssgproject.content_profile_pci-dss Title: OSPP - Protection Profile for General Purpose Operating Systems Id: xccdf_org.ssgproject.content_profile_ospp ...", "oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml ... Profile Title: Health Insurance Portability and Accountability Act (HIPAA) Description: The HIPAA Security Rule establishes U.S. national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. ...", "oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap-ssh <username> @ <hostname> <port> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap xccdf eval --profile <profileID> --remediate /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "ansible-playbook -i localhost, -c local /usr/share/scap-security-guide/ansible/rhel8-playbook-hipaa.yml", "oscap xccdf eval --profile hipaa --report <scan-report.html> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap info <hipaa-results.xml>", "oscap xccdf generate fix --fix-type ansible --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.yml> <hipaa-results.xml>", "oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap info <hipaa-results.xml>", "oscap xccdf generate fix --fix-type bash --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.sh> <hipaa-results.xml>", "scap-workbench &", "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest 096cae65a207 7 weeks ago 239 MB", "oscap-podman 096cae65a207 oval eval --report vulnerability.html rhel-8.oval.xml", "firefox vulnerability.html &", "oscap-podman <ID> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "firefox <scan-report.html> &amp;", "yum install aide", "aide --init Start timestamp: 2024-07-08 10:39:23 -0400 (AIDE 0.16) AIDE initialized database at /var/lib/aide/aide.db.new.gz Number of entries: 55856 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db.new.gz ... SHA512 : mZaWoGzL2m6ZcyyZ/AXTIowliEXWSZqx IFYImY4f7id4u+Bq8WeuSE2jasZur/A4 FPBFaBkoCFHdoE/FW/V94Q==", "mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz", "aide --check Start timestamp: 2024-07-08 10:43:46 -0400 (AIDE 0.16) AIDE found differences between database and filesystem!! 
Summary: Total number of entries: 55856 Added entries: 0 Removed entries: 0 Changed entries: 1 --------------------------------------------------- Changed entries: --------------------------------------------------- f ... ..S : /root/.viminfo --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /root/.viminfo SELinux : system_u:object_r:admin_home_t:s | unconfined_u:object_r:admin_home 0 | _t:s0 ...", "05 4 * * * root /usr/sbin/aide --check", "aide --update", "cat /sys/class/tpm/tpm0/tpm_version_major 2", "modprobe trusted", "TPM_DEVICE=/dev/tpm0 tsscreateprimary -hi o -st Handle 80000000 TPM_DEVICE=/dev/tpm0 tssevictcontrol -hi o -ho 80000000 -hp 81000001", "tpm2_createprimary --key-algorithm=rsa2048 --key-context=key.ctxt name-alg: value: sha256 raw: 0xb ... sym-keybits: 128 rsa: xxxxxx... tpm2_evictcontrol -c key.ctxt 0x81000001 persistentHandle: 0x81000001 action: persisted", "keyctl add trusted kmk \"new 32 keyhandle=0x81000001\" @u 642500861", "keyctl add trusted kmk \"new 32\" @u", "keyctl show Session Keyring -3 --alswrv 500 500 keyring: ses 97833714 --alswrv 500 -1 \\ keyring: uid.1000 642500861 --alswrv 500 500 \\ trusted: kmk", "keyctl pipe 642500861 > kmk.blob", "keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824", "keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175", "modprobe encrypted-keys", "keyctl add user kmk-user \"USD(dd if=/dev/urandom bs=1 count=32 2>/dev/null)\" @u 427069434", "keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758", "keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key", "mount securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)", "grep < options > pattern < files >", "dmesg | grep -i -e EVM -e IMA -w [ 0.598533] ima: No TPM chip found, activating TPM-bypass! [ 0.599435] ima: Allocated hash algorithm: sha256 [ 0.600266] ima: No architecture policies found [ 0.600813] evm: Initialising EVM extended attributes: [ 0.601581] evm: security.selinux [ 0.601963] evm: security.ima [ 0.602353] evm: security.capability [ 0.602713] evm: HMAC attrs: 0x1 [ 1.455657] systemd[1]: systemd 239 (239-74.el8_8) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy) [ 2.532639] systemd[1]: systemd 239 (239-74.el8_8) running in system mode. 
(+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"ima_policy=appraise_tcb ima_appraise=fix evm=fix\"", "cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-167.el8.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ima_policy=appraise_tcb ima_appraise=fix evm=fix", "keyctl add user kmk \"USD(dd if=/dev/urandom bs=1 count=32 2> /dev/null)\" @u 748544121", "keyctl add encrypted evm-key \"new user:kmk 64\" @u 641780271", "mkdir -p /etc/keys/", "keyctl pipe USD(keyctl search @u user kmk) > /etc/keys/kmk", "keyctl pipe USD(keyctl search @u encrypted evm-key) > /etc/keys/evm-key", "keyctl show Session Keyring 974575405 --alswrv 0 0 keyring: ses 299489774 --alswrv 0 65534 \\ keyring: uid.0 748544121 --alswrv 0 0 \\ user: kmk 641780271 --alswrv 0 0 \\_ encrypted: evm-key ls -l /etc/keys/ total 8 -rw-r--r--. 1 root root 246 Jun 24 12:44 evm-key -rw-r--r--. 1 root root 32 Jun 24 12:43 kmk", "keyctl add user kmk \"USD(cat /etc/keys/kmk)\" @u 451342217", "keyctl add encrypted evm-key \"load USD(cat /etc/keys/evm-key)\" @u 924537557", "echo 1 > /sys/kernel/security/evm", "find / -fstype xfs -type f -uid 0 -exec head -n 1 '{}' >/dev/null \\;", "dmesg | tail -1 [... ] evm: key initialized", "echo < Test_text > > test_file", "getfattr -m . -d test_file file: test_file security.evm=0sAnDIy4VPA0HArpPO/EqiutnNyBql security.ima=0sAQOEDeuUnWzwwKYk+n66h/vby3eD", "umount /dev/mapper/vg00-lv00", "lvextend -L+ 32M /dev/mapper/vg00-lv00", "cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32M /dev/mapper/ vg00-lv00 lv00_encrypted /dev/mapper/ lv00_encrypted is now active and ready for online encryption.", "mount /dev/mapper/ lv00_encrypted /mnt/lv00_encrypted", "cryptsetup luksUUID /dev/mapper/ vg00-lv00 a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325", "vi /etc/crypttab lv00_encrypted UUID= a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 none", "dracut -f --regenerate-all", "blkid -p /dev/mapper/ lv00_encrypted /dev/mapper/ lv00-encrypted : UUID=\" 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 \" BLOCK_SIZE=\"4096\" TYPE=\"xfs\" USAGE=\"filesystem\"", "vi /etc/fstab UUID= 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 /home auto rw,user,auto 0", "cryptsetup reencrypt --resume-only /dev/mapper/ vg00-lv00 Enter passphrase for /dev/mapper/ vg00-lv00 : Auto-detected active dm device ' lv00_encrypted ' for data device /dev/mapper/ vg00-lv00 . Finished, time 00:31.130, 10272 MiB written, speed 330.0 MiB/s", "cryptsetup luksDump /dev/mapper/ vg00-lv00 LUKS header information Version: 2 Epoch: 4 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 [...]", "cryptsetup status lv00_encrypted /dev/mapper/ lv00_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/mapper/ vg00-lv00", "umount /dev/ nvme0n1p1", "cryptsetup reencrypt --encrypt --init-only --header /home/header /dev/ nvme0n1p1 nvme_encrypted WARNING! ======== Header file does not exist, do you want to create it? Are you sure? 
(Type 'yes' in capital letters): YES Enter passphrase for /home/header : Verify passphrase: /dev/mapper/ nvme_encrypted is now active and ready for online encryption.", "mount /dev/mapper/ nvme_encrypted /mnt/nvme_encrypted", "cryptsetup reencrypt --resume-only --header /home/header /dev/ nvme0n1p1 Enter passphrase for /dev/ nvme0n1p1 : Auto-detected active dm device 'nvme_encrypted' for data device /dev/ nvme0n1p1 . Finished, time 00m51s, 10 GiB written, speed 198.2 MiB/s", "cryptsetup luksDump /home/header LUKS header information Version: 2 Epoch: 88 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: c4f5d274-f4c0-41e3-ac36-22a917ab0386 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 0 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]", "cryptsetup status nvme_encrypted /dev/mapper/ nvme_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1", "cryptsetup luksFormat /dev/ nvme0n1p1 WARNING! ======== This will overwrite data on /dev/nvme0n1p1 irrevocably. Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /dev/ nvme0n1p1 : Verify passphrase:", "cryptsetup open /dev/ nvme0n1p1 nvme0n1p1_encrypted Enter passphrase for /dev/ nvme0n1p1 :", "mkfs -t ext4 /dev/mapper/ nvme0n1p1_encrypted", "mount /dev/mapper/ nvme0n1p1_encrypted mount-point", "cryptsetup luksDump /dev/ nvme0n1p1 LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 34ce4870-ffdf-467c-9a9e-345a53ed8a25 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]", "cryptsetup status nvme0n1p1_encrypted /dev/mapper/ nvme0n1p1_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1 sector size: 512 offset: 32768 sectors size: 20938752 sectors mode: read/write", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "luks_password: <password>", "--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c", "ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. 
type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]", "yum install tang", "semanage port -a -t tangd_port_t -p tcp 7500", "firewall-cmd --add-port= 7500 /tcp firewall-cmd --runtime-to-permanent", "systemctl enable tangd.socket", "systemctl edit tangd.socket", "[Socket] ListenStream= ListenStream= 7500", "systemctl daemon-reload", "systemctl show tangd.socket -p Listen Listen=[::]:7500 (Stream)", "systemctl restart tangd.socket", "echo test | clevis encrypt tang '{\"url\":\" <tang.server.example.com:7500> \"}' -y | clevis decrypt test", "cd /var/db/tang ls -l -rw-r--r--. 1 root root 349 Feb 7 14:55 UV6dqXSwe1bRKG3KbJmdiR020hY.jwk -rw-r--r--. 1 root root 354 Feb 7 14:55 y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk mv UV6dqXSwe1bRKG3KbJmdiR020hY.jwk .UV6dqXSwe1bRKG3KbJmdiR020hY.jwk mv y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk .y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk", "ls -l total 0", "/usr/libexec/tangd-keygen /var/db/tang ls /var/db/tang 3ZWS6-cDrCG61UPJS2BMmPU4I54.jwk zyLuX6hijUy_PSeUEFDi7hi38.jwk", "tang-show-keys 7500 3ZWS6-cDrCG61UPJS2BMmPU4I54", "clevis luks list -d /dev/sda2 1: tang '{\"url\":\" http://tang.srv \"}' clevis luks report -d /dev/sda2 -s 1 Report detected that some keys were rotated. Do you want to regenerate luks metadata with \"clevis luks regen -d /dev/sda2 -s 1\"? [ynYN]", "clevis luks regen -d /dev/sda2 -s 1", "cd /var/db/tang rm .*.jwk", "tang-show-keys 7500 x100_1k6GPiDOaMlL3WbpCjHOy9ul1bSfdhI3M08wO0", "lsinitrd | grep clevis-luks lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...", "clevis encrypt tang '{\"url\":\" http://tang.srv:port \"}' < input-plain.txt > secret.jwe The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y", "curl -sfg http://tang.srv:port /adv -o adv.jws", "echo 'hello' | clevis encrypt tang '{\"url\":\" http://tang.srv:port \",\"adv\":\" adv.jws \"}'", "clevis decrypt < secret.jwe > output-plain.txt", "clevis encrypt tpm2 '{}' < input-plain.txt > secret.jwe", "clevis encrypt tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\"}' < input-plain.txt > secret.jwe", "clevis decrypt < secret.jwe > output-plain.txt", "clevis encrypt tpm2 '{\"pcr_bank\":\"sha256\",\"pcr_ids\":\"0,7\"}' < input-plain.txt > secret.jwe", "clevis encrypt tang Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE", "yum install clevis-luks", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 12G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 11G 0 part └─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0 0 11G 0 crypt ├─rhel-root 253:0 0 9.8G 0 lvm / └─rhel-swap 253:1 0 1.2G 0 lvm [SWAP]", "clevis luks bind -d /dev/sda2 tang '{\"url\":\" http://tang.srv \"}' The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y You are about to initialize a LUKS device for metadata storage. Attempting to initialize it may result in data loss if data was already written into the LUKS header gap in a different format. 
A backup is advised before initialization is performed. Do you wish to initialize /dev/sda2? [yn] y Enter existing LUKS password:", "yum install clevis-dracut", "dracut -fv --regenerate-all --hostonly-cmdline", "echo \"hostonly_cmdline=yes\" > /etc/dracut.conf.d/clevis.conf dracut -fv --regenerate-all", "grubby --update-kernel=ALL --args=\"rd.neednet=1\"", "clevis luks list -d /dev/sda2 1: tang '{\"url\":\"http://tang.srv:port\"}'", "lsinitrd | grep clevis-luks lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...", "dracut -fv --regenerate-all --kernel-cmdline \"ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100\"", "cat /etc/dracut.conf.d/static_ip.conf kernel_cmdline=\"ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100\" dracut -fv --regenerate-all", "yum install clevis-luks", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 12G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 11G 0 part └─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0 0 11G 0 crypt ├─rhel-root 253:0 0 9.8G 0 lvm / └─rhel-swap 253:1 0 1.2G 0 lvm [SWAP]", "clevis luks bind -d /dev/sda2 tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\"}' Do you wish to initialize /dev/sda2? [yn] y Enter existing LUKS password:", "clevis luks bind -d /dev/sda2 tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\",\"pcr_bank\":\"sha256\",\"pcr_ids\":\"0,1\"}'", "yum install clevis-dracut dracut -fv --regenerate-all", "clevis luks list -d /dev/sda2 1: tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\"}'", "clevis luks unbind -d /dev/sda2 -s 1", "cryptsetup luksDump /dev/sda2 LUKS header information Version: 2 Keyslots: 0: luks2 1: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Tokens: 0: clevis Keyslot: 1", "cryptsetup token remove --token-id 0 /dev/sda2", "luksmeta wipe -d /dev/sda2 -s 1", "cryptsetup luksKillSlot /dev/sda2 1", "part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --grow --encrypted --passphrase=temppass", "part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --size=2048 --encrypted --passphrase=temppass part /var --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /tmp --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /home --fstype=\"xfs\" --ondisk=vda --size=2048 --grow --encrypted --passphrase=temppass part /var/log --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /var/log/audit --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass", "%packages clevis-dracut clevis-luks clevis-systemd %end", "%post clevis luks bind -y -k - -d /dev/vda2 tang '{\"url\":\"http://tang.srv\"}' <<< \"temppass\" cryptsetup luksRemoveKey /dev/vda2 <<< \"temppass\" dracut -fv --regenerate-all %end", "%post curl -sfg http://tang.srv/adv -o adv.jws clevis luks bind -f -k - -d /dev/vda2 tang '{\"url\":\"http://tang.srv\",\"adv\":\"adv.jws\"}' <<< \"temppass\" cryptsetup luksRemoveKey /dev/vda2 <<< \"temppass\" dracut -fv --regenerate-all %end", "yum install clevis-udisks2", "clevis luks bind -d /dev/sdb1 tang '{\"url\":\" http://tang.srv \"}'", "clevis luks unlock -d /dev/sdb1", "clevis luks bind -d /dev/sda1 sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\" http://tang1.srv \"},{\"url\":\" http://tang2.srv \"}]}}'", "{ \"t\":1, \"pins\":{ \"tang\":[ { \"url\":\"http://tang1.srv\" }, { \"url\":\"http://tang2.srv\" } ] } }", 
"clevis luks bind -d /dev/sda1 sss '{\"t\":2,\"pins\":{\"tang\":[{\"url\":\" http://tang1.srv \"}], \"tpm2\": {\"pcr_ids\":\"0,7\"}}}'", "{ \"t\":2, \"pins\":{ \"tang\":[ { \"url\":\"http://tang1.srv\" } ], \"tpm2\":{ \"pcr_ids\":\"0,7\" } } }", "podman pull registry.redhat.io/rhel8/tang", "podman run -d -p 7500:7500 -v tang-keys:/var/db/tang --name tang registry.redhat.io/rhel8/tang", "podman run --rm -v tang-keys:/var/db/tang registry.redhat.io/rhel8/tang tangd-rotate-keys -v -d /var/db/tang Rotated key 'rZAMKAseaXBe0rcKXL1hCCIq-DY.jwk' -> .'rZAMKAseaXBe0rcKXL1hCCIq-DY.jwk' Rotated key 'x1AIpc6WmnCU-CabD8_4q18vDuw.jwk' -> .'x1AIpc6WmnCU-CabD8_4q18vDuw.jwk' Created new key GrMMX_WfdqomIU_4RyjpcdlXb0E.jwk Created new key _dTTfn17sZZqVAp80u3ygFDHtjk.jwk Keys rotated successfully.", "echo test | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' | clevis decrypt The advertisement contains the following signing keys: x1AIpc6WmnCU-CabD8_4q18vDuw Do you wish to trust these keys? [ynYN] y test", "--- - name: Deploy a Tang server hosts: tang.server.example.com tasks: - name: Install and configure periodic key rotation ansible.builtin.include_role: name: rhel-system-roles.nbde_server vars: nbde_server_rotate_keys: yes nbde_server_manage_firewall: true nbde_server_manage_selinux: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'echo test | clevis encrypt tang '{\"url\":\" <tang.server.example.com> \"}' -y | clevis decrypt' test", "--- - name: Configure clients for unlocking of encrypted volumes by Tang servers hosts: managed-node-01.example.com tasks: - name: Create NBDE client bindings ansible.builtin.include_role: name: rhel-system-roles.nbde_client vars: nbde_client_bindings: - device: /dev/rhel/root encryption_key_src: /etc/luks/keyfile nbde_client_early_boot: true state: present servers: - http://server1.example.com - http://server2.example.com - device: /dev/rhel/swap encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'clevis luks list -d /dev/rhel/root' 1: tang '{\"url\":\" <http://server1.example.com/> \"}' 2: tang '{\"url\":\" <http://server2.example.com/> \"}'", "ansible managed-node-01.example.com -m command -a 'lsinitrd | grep clevis-luks' lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...", "clients: managed-node-01.example.com: ip_v4: 192.0.2.1 gateway_v4: 192.0.2.254 netmask_v4: 255.255.255.0 interface: enp1s0 managed-node-02.example.com: ip_v4: 192.0.2.2 gateway_v4: 192.0.2.254 netmask_v4: 255.255.255.0 interface: enp1s0", "- name: Configure clients for unlocking of encrypted volumes by Tang servers hosts: managed-node-01.example.com,managed-node-02.example.com vars_files: - ~/static-ip-settings-clients.yml tasks: - name: Create NBDE client bindings ansible.builtin.include_role: name: rhel-system-roles.network vars: nbde_client_bindings: - device: /dev/rhel/root encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com - device: /dev/rhel/swap encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com - name: Configure a Clevis client with static IP address during early boot 
ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - kernel: ALL options: - name: ip value: \"{{ clients[inventory_hostname]['ip_v4'] }}::{{ clients[inventory_hostname]['gateway_v4'] }}:{{ clients[inventory_hostname]['netmask_v4'] }}::{{ clients[inventory_hostname]['interface'] }}:none\"", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "# service auditd start", "# systemctl enable auditd", "auditctl -w /etc/ssh/sshd_config -p warx -k sshd_config", "USD cat /etc/ssh/sshd_config", "type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm=\"cat\" exe=\"/bin/cat\" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=\"sshd_config\" type=CWD msg=audit(1364481363.243:24287): cwd=\"/home/shadowman\" type=PATH msg=audit(1364481363.243:24287): item=0 name=\"/etc/ssh/sshd_config\" inode=409248 dev=fd:00 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 type=PROCTITLE msg=audit(1364481363.243:24287) : proctitle=636174002F6574632F7373682F737368645F636F6E666967", "# ausearch --interpret --exit -13", "# find / -inum 409248 -print /etc/ssh/sshd_config", "auditctl -w /etc/passwd -p wa -k passwd_changes", "auditctl -w /etc/selinux/ -p wa -k selinux_changes", "auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change", "auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete", "auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id", "auditctl -R /usr/share/audit/sample-rules/30-stig.rules", "cd /usr/share/audit/sample-rules/ cp 10-base-config.rules 30-stig.rules 31-privileged.rules 99-finalize.rules /etc/audit/rules.d/ augenrules --load", "augenrules --load /sbin/augenrules: No change No rules enabled 1 failure 1 pid 742 rate_limit 0", "cp -f /usr/lib/systemd/system/auditd.service /etc/systemd/system/", "vi /etc/systemd/system/auditd.service", "#ExecStartPost=-/sbin/augenrules --load ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules", "systemctl daemon-reload", "service auditd restart", "cp /usr/share/audit/sample-rules/44-installers.rules /etc/audit/rules.d/", "-a always,exit -F perm=x -F path=/usr/bin/dnf-3 -F key=software-installer -a always,exit -F perm=x -F path=/usr/bin/yum -F", "augenrules --load", "auditctl -l -p x-w /usr/bin/dnf-3 -k software-installer -p x-w /usr/bin/yum -k software-installer -p x-w /usr/bin/pip -k software-installer -p x-w /usr/bin/npm -k software-installer -p x-w /usr/bin/cpan -k software-installer -p x-w /usr/bin/gem -k software-installer -p x-w /usr/bin/luarocks -k software-installer", "yum reinstall -y vim-enhanced", "ausearch -ts recent -k software-installer ---- time->Thu Dec 16 10:33:46 2021 type=PROCTITLE msg=audit(1639668826.074:298): proctitle=2F7573722F6C6962657865632F706C6174666F726D2D707974686F6E002F7573722F62696E2F646E66007265696E7374616C6C002D790076696D2D656E68616E636564 type=PATH msg=audit(1639668826.074:298): item=2 name=\"/lib64/ld-linux-x86-64.so.2\" inode=10092 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:ld_so_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1639668826.074:298): item=1 
name=\"/usr/libexec/platform-python\" inode=4618433 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:bin_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1639668826.074:298): item=0 name=\"/usr/bin/dnf\" inode=6886099 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:rpm_exec_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=CWD msg=audit(1639668826.074:298): cwd=\"/root\" type=EXECVE msg=audit(1639668826.074:298): argc=5 a0=\"/usr/libexec/platform-python\" a1=\"/usr/bin/dnf\" a2=\"reinstall\" a3=\"-y\" a4=\"vim-enhanced\" type=SYSCALL msg=audit(1639668826.074:298): arch=c000003e syscall=59 success=yes exit=0 a0=55c437f22b20 a1=55c437f2c9d0 a2=55c437f2aeb0 a3=8 items=3 ppid=5256 pid=5375 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3 comm=\"dnf\" exe=\"/usr/libexec/platform-python3.6\" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=\"software-installer\"", "ausearch -m USER_LOGIN -ts ' 12/02/2020 ' ' 18:00:00 ' -sv no time->Mon Nov 22 07:33:22 2021 type=USER_LOGIN msg=audit(1637584402.416:92): pid=1939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct=\"(unknown)\" exe=\"/usr/sbin/sshd\" hostname=? addr=10.37.128.108 terminal=ssh res=failed'", "ausearch --raw | aulast --stdin root ssh 10.37.128.108 Mon Nov 22 07:33 - 07:33 (00:00) root ssh 10.37.128.108 Mon Nov 22 07:33 - 07:33 (00:00) root ssh 10.22.16.106 Mon Nov 22 07:40 - 07:40 (00:00) reboot system boot 4.18.0-348.6.el8 Mon Nov 22 07:33", "aureport --login -i Login Report ============================================ date time auid host term exe success event ============================================ 1. 11/16/2021 13:11:30 root 10.40.192.190 ssh /usr/sbin/sshd yes 6920 2. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6925 3. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6930 4. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6935 5. 11/16/2021 13:11:33 root 10.40.192.190 ssh /usr/sbin/sshd yes 6940 6. 11/16/2021 13:11:33 root 10.40.192.190 /dev/pts/0 /usr/sbin/sshd yes 6945", "yum install fapolicyd", "vi /etc/fapolicyd/fapolicyd.conf", "permissive = 1", "systemctl enable --now fapolicyd", "auditctl -w /etc/fapolicyd/ -p wa -k fapolicyd_changes service try-restart auditd", "ausearch -ts recent -m fanotify", "systemctl restart fapolicyd", "systemctl status fapolicyd ● fapolicyd.service - File Access Policy Daemon Loaded: loaded (/usr/lib/systemd/system/fapolicyd.service; enabled; preset: disabled) Active: active (running) since Tue 2024-10-08 05:53:50 EDT; 11s ago ... 
Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Loading trust data from rpmdb backend Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Loading trust data from file backend Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Starting to listen for events", "cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted", "cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted", "fapolicyd-cli --file add /tmp/ls --trust-file myapp", "fapolicyd-cli --update", "/tmp/ls ls", "cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted", "systemctl stop fapolicyd", "fapolicyd --debug-deny 2> fapolicy.output & [1] 51341", "/tmp/ls bash: /tmp/ls: Operation not permitted", "fg fapolicyd --debug 2> fapolicy.output ^C", "kill 51341", "cat fapolicy.output | grep 'deny_audit' rule=13 dec=deny_audit perm=execute auid=0 pid=6855 exe=/usr/bin/bash : path=/tmp/ls ftype=application/x-executable trust=0", "ls /etc/fapolicyd/rules.d/ 10-languages.rules 40-bad-elf.rules 72-shell.rules 20-dracut.rules 41-shared-obj.rules 90-deny-execute.rules 21-updaters.rules 42-trusted-elf.rules 95-allow-open.rules 30-patterns.rules 70-trusted-lang.rules cat /etc/fapolicyd/rules.d/90-deny-execute.rules Deny execution for anything untrusted deny_audit perm=execute all : all", "touch /etc/fapolicyd/rules.d/80-myapps.rules vi /etc/fapolicyd/rules.d/80-myapps.rules", "allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0", "allow perm=execute exe=/usr/bin/bash trust=1 : dir=/tmp/ trust=0", "sha256sum /tmp/ls 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836 ls", "allow perm=execute exe=/usr/bin/bash trust=1 : sha256hash= 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836", "fagenrules --check /usr/sbin/fagenrules: Rules have changed and should be updated fagenrules --load", "fapolicyd-cli --list 13. allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0 14. deny_audit perm=execute all : all", "systemctl start fapolicyd", "/tmp/ls ls", "vi /etc/fapolicyd/fapolicyd.conf", "integrity = sha256", "systemctl restart fapolicyd", "cp /bin/more /bin/more.bak", "cat /bin/less > /bin/more", "su example.user /bin/more /etc/redhat-release bash: /bin/more: Operation not permitted", "mv -f /bin/more.bak /bin/more", "rpm -i application .rpm", "fapolicyd-cli --update", "systemctl status fapolicyd", "fapolicyd-cli --check-config Daemon config is OK fapolicyd-cli --check-trustdb /etc/selinux/targeted/contexts/files/file_contexts miscompares: size sha256 /etc/selinux/targeted/policy/policy.31 miscompares: size sha256", "fapolicyd-cli --list 9. allow perm=execute all : trust=1 10. allow perm=open all : ftype=%languages trust=1 11. deny_audit perm=any all : ftype=%languages 12. allow perm=any all : ftype=text/x-shellscript 13. 
deny_audit perm=execute all : all", "systemctl stop fapolicyd", "fapolicyd --debug", "fapolicyd --debug 2> fapolicy.output", "fapolicyd --debug-deny", "fapolicyd --debug-deny --permissive", "systemctl stop fapolicyd fapolicyd-cli --delete-db", "fapolicyd-cli --dump-db", "rm -f /var/run/fapolicyd/fapolicyd.fifo", "--- - name: Configuring fapolicyd hosts: managed-node-01.example.com tasks: - name: Allow only executables installed from RPM database and specific files ansible.builtin.include_role: name: rhel-system-roles.fapolicyd vars: fapolicyd_setup_permissive: false fapolicyd_setup_integrity: sha256 fapolicyd_setup_trust: rpmdb,file fapolicyd_add_trusted_file: - <path_to_allowed_command> - <path_to_allowed_service>", "ansible-playbook ~/playbook.yml --syntax-check", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'su -c \"/bin/not_authorized_application \" <user_name> ' bash: line 1: /bin/not_authorized_application: Operation not permitted non-zero return code", "yum install usbguard", "usbguard generate-policy > /etc/usbguard/rules.conf", "systemctl enable --now usbguard", "systemctl status usbguard ● usbguard.service - USBGuard daemon Loaded: loaded (/usr/lib/systemd/system/usbguard.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2019-11-07 09:44:07 CET; 3min 16s ago Docs: man:usbguard-daemon(8) Main PID: 6122 (usbguard-daemon) Tasks: 3 (limit: 11493) Memory: 1.2M CGroup: /system.slice/usbguard.service └─6122 /usr/sbin/usbguard-daemon -f -s -c /etc/usbguard/usbguard-daemon.conf Nov 07 09:44:06 localhost.localdomain systemd[1]: Starting USBGuard daemon Nov 07 09:44:07 localhost.localdomain systemd[1]: Started USBGuard daemon.", "usbguard list-devices 4: allow id 1d6b:0002 serial \"0000:02:00.0\" name \"xHCI Host Controller\" hash", "usbguard list-devices 1: allow id 1d6b:0002 serial \"0000:00:06.7\" name \"EHCI Host Controller\" hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" parent-hash \"4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=\" via-port \"usb1\" with-interface 09:00:00 6: block id 1b1c:1ab1 serial \"000024937962\" name \"Voyager\" hash \"CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=\" parent-hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" via-port \"1-3\" with-interface 08:06:50", "usbguard allow-device <6>", "usbguard reject-device <6>", "usbguard block-device <6>", "semanage boolean -l | grep usbguard usbguard_daemon_write_conf (off , off) Allow usbguard to daemon write conf usbguard_daemon_write_rules (on , on) Allow usbguard to daemon write rules", "semanage boolean -m --on usbguard_daemon_write_rules", "usbguard list-devices 1: allow id 1d6b:0002 serial \"0000:00:06.7\" name \"EHCI Host Controller\" hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" parent-hash \"4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=\" via-port \"usb1\" with-interface 09:00:00 6 : block id 1b1c:1ab1 serial \"000024937962\" name \"Voyager\" hash \"CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=\" parent-hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" via-port \"1-3\" with-interface 08:06:50", "usbguard allow-device 6 -p", "usbguard reject-device 6 -p", "usbguard block-device 6 -p", "usbguard list-rules", "usbguard generate-policy --no-hashes > ./rules.conf", "vi ./rules.conf", "allow with-interface equals { 08:*:* }", "install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf", "systemctl restart usbguard", "usbguard list-rules 4: allow with-interface 08:*:*", "usbguard generate-policy --no-hashes > ./ 
policy.conf", "vi ./ policy.conf allow id 04f2:0833 serial \"\" name \"USB Keyboard\" via-port \"7-2\" with-interface { 03:01:01 03:00:00 } with-connect-type \"unknown\"", "grep \" USB Keyboard \" ./ policy.conf > ./ 10keyboards.conf", "install -m 0600 -o root -g root 10keyboards.conf /etc/usbguard/rules.d/ 10keyboards.conf", "grep -v \" USB Keyboard \" ./policy.conf > ./rules.conf", "install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf", "systemctl restart usbguard", "usbguard list-rules 15: allow id 04f2:0833 serial \"\" name \"USB Keyboard\" hash \"kxM/iddRe/WSCocgiuQlVs6Dn0VEza7KiHoDeTz0fyg=\" parent-hash \"2i6ZBJfTl5BakXF7Gba84/Cp1gslnNc1DM6vWQpie3s=\" via-port \"7-2\" with-interface { 03:01:01 03:00:00 } with-connect-type \"unknown\"", "cat /etc/usbguard/rules.conf /etc/usbguard/rules.d/*.conf", "vi /etc/usbguard/usbguard-daemon.conf", "IPCAllowGroups=wheel", "usbguard add-user joesec --devices ALL --policy modify,list --exceptions ALL", "systemctl restart usbguard", "vi /etc/usbguard/usbguard-daemon.conf", "AuditBackend=LinuxAudit", "systemctl restart usbguard", "ausearch -ts recent -m USER_DEVICE" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/security_hardening/index
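The Tang key rotation and the Clevis client rebinding shown in the commands above are separate manual steps. The following is a minimal sketch of how they could be chained from an administration host; the host names (tang.srv, client1.example.com), the LUKS device (/dev/sda2), and the slot number (1) are placeholders taken from the examples above and must match what clevis luks list reports in your environment. Passwordless root SSH to both machines is assumed, and clevis luks regen may ask for interactive confirmation.
# On the Tang server, retire the current keys and generate a new pair:
ssh root@tang.srv 'cd /var/db/tang && for key in *.jwk; do mv -- "$key" ".$key"; done && /usr/libexec/tangd-keygen /var/db/tang'
# On each Clevis client, regenerate the binding in the slot reported by "clevis luks list":
ssh root@client1.example.com 'clevis luks regen -d /dev/sda2 -s 1'
# Only after every client has rebound, remove the retired (hidden) keys on the server:
ssh root@tang.srv 'rm -f /var/db/tang/.*.jwk'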
Chapter 5. Technology Preview, Deprecated, and Removed Features
Chapter 5. Technology Preview, Deprecated, and Removed Features 5.1. Technology Preview Features Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope. The following table describes features available as Technology Previews in Red Hat Virtualization. Table 5.1. Technology Preview Features Technology Preview Feature Details IPv6 Static IPv6 assignment is fully supported in Red Hat Virtualization 4.3 and 4.4, but dynamic IPv6 assignment is available as a Technology Preview. Note All hosts in the cluster must use IPv4 or IPv6 for RHV networks, not simultaneous IPv4 and IPv6, because dual stack is not supported. For details about IPv6 support, see IPv6 Networking Support in the Administration Guide. NoVNC console option Option for opening a virtual machine console in the browser using HTML5. Websocket proxy Allows users to connect to virtual machines through a noVNC console. VDSM hook for nested virtualization Allows a virtual machine to serve as a host. For details, see Enabling nested virtualization for all virtual machines in the Administration Guide. Import Debian and Ubuntu virtual machines from VMware and RHEL 5 Xen Allows virt-v2v to convert Debian and Ubuntu virtual machines from VMware or RHEL 5 Xen to KVM. Known Issues: virt-v2v cannot change the default kernel in the GRUB2 configuration. The kernel configured on the guest operating system is not changed during the conversion, even if a more optimal version is available. After converting a Debian or Ubuntu virtual machine from VMware to KVM, the name of the virtual machine's network interface may change and will need to be configured manually. NVDIMM host devices Support for attaching an emulated NVDIMM to virtual machines that are backed by NVDIMM on the host machine. For details, see NVDIMM Host Devices. Open vSwitch (OVS) cluster type support Adds Open vSwitch networking capabilities. Shared and local storage in the same data center Allows the creation of single-brick Gluster volumes to enable local storage to be used as a storage domain in shared data centers. Cinderlib integration Leverages the CinderLib library to use Cinder-supported storage drivers in Red Hat Virtualization without a full Cinder-OpenStack deployment. Adds support for Ceph storage along with Fibre Channel and iSCSI storage. The Cinder volume has multipath support on the Red Hat Virtualization Host. SSO with OpenID Connect Adds support for external OpenID Connect authentication using Keycloak in both the user interface and with the REST API. oVirt Engine Backup Adds support to back up and restore Red Hat Virtualization Manager with the Ansible ovirt-engine-backup role. Failover vNIC profile Allows users to migrate a virtual machine connected via SR-IOV with minimal downtime by using a failover network that is activated during migration. Dedicated CPU pinning policy Guest vCPUs will be exclusively pinned to a set of host pCPUs (similar to static CPU pinning). The set of pCPUs will be chosen to match the required guest CPU topology. If the host has an SMT architecture, thread siblings are preferred.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/tech_preview_and_deprecated_features
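As a quick host-level check related to the nested virtualization entry above, the kernel module parameter below shows whether nested virtualization is currently enabled on a hypervisor host. This is only a sketch of a host check, not the RHV-side VDSM hook configuration; the module name depends on the CPU vendor.
# Intel hosts: "Y" or "1" means nested virtualization is enabled in the kvm_intel module.
cat /sys/module/kvm_intel/parameters/nested
# AMD hosts use the kvm_amd module instead.
cat /sys/module/kvm_amd/parameters/nested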
14.8. Additional Resources
14.8. Additional Resources dhcpd(8) man page - Describes how the DHCP daemon works. dhcpd.conf(5) man page - Explains how to configure the DHCP configuration file; includes some examples. dhcpd.leases(5) man page - Describes a persistent database of leases. dhcp-options(5) man page - Explains the syntax for declaring DHCP options in dhcpd.conf ; includes some examples. dhcrelay(8) man page - Explains the DHCP Relay Agent and its configuration options. /usr/share/doc/dhcp- version / - Contains example files, README files, and release notes for current versions of the DHCP service.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-dhcp-additional-resources
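All of the resources listed above ship with the dhcp package, so they can be opened directly on a RHEL 7 system as sketched below; the exact versioned documentation directory and the presence of the example file depend on the installed package.
# Open the configuration man page referenced above.
man 5 dhcpd.conf
# List the packaged documentation directory and view the shipped example configuration.
ls /usr/share/doc/dhcp-*/
cat /usr/share/doc/dhcp-*/dhcpd.conf.example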
14.5. The (Non-transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss EAP)
14.5. The (Non-transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss EAP) The CarMart (non-transactional) quickstart is supported for JBoss Data Grid's Remote Client-Server Mode with the JBoss Enterprise Application Platform container. 14.5.1. Build and Deploy the CarMart Quickstart in Remote Client-Server Mode This quickstart accesses Red Hat JBoss Data Grid via Hot Rod. This feature is not available for the Transactional CarMart quickstart. Important This quickstart deploys to JBoss Enterprise Application Platform. The application cannot be deployed to JBoss Data Grid because it does not support application deployment. Prerequisites Prerequisites for this procedure are as follows: Obtain the most recent supported JBoss Data Grid Remote Client-Server Mode distribution files from Red Hat. Ensure that the JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories are installed and configured. For details, see Chapter 3, Install and Use the Maven Repositories. Select a JBoss server to use (JBoss Enterprise Application Platform 6 or later). Navigate to the root of the JBoss server directory in a terminal window and enter the following command: For Linux users: For Windows users: Procedure 14.7. Build and Deploy the CarMart Quickstart in Remote Client-Server Mode Configure the Standalone File Add the following configuration to the standalone.xml file located in the USDJDG_HOME/standalone/configuration/ directory. Add the following configuration within the infinispan subsystem tags: Note If the carcache element already exists in your configuration, replace it with the provided configuration. Start the JBoss Data Grid Server Run the following script to start the JBoss Data Grid Server: Start the JBoss Server Run the following script to start the JBoss server instance where your application will deploy: Optional: Specify the Host and Port Address The application uses the values in the jboss-datagrid-{VERSION}-quickstarts/carmart/src/main/resources/META-INF/datagrid.properties file to locate the JBoss Data Grid server. If your JBoss Data Grid server is not running using the default host and port values, edit the file and insert the correct host and port values, as follows: Navigate to the Root Directory Open a command line and navigate to the root directory of this quickstart. Build and Deploy the Application Use the following command to build and deploy your application using Maven: 14.5.2. View the CarMart Quickstart in Remote Client-Server Mode The following procedure outlines how to view the CarMart quickstart in Red Hat JBoss Data Grid's Remote Client-Server Mode: Prerequisite The CarMart quickstart must be built and deployed to be viewed. Procedure 14.8. View the CarMart Quickstart in Remote Client-Server Mode Visit the following link in a browser window to view the application: 14.5.3. Remove the CarMart Quickstart in Remote Client-Server Mode The following procedure provides directions to remove an already deployed application in Red Hat JBoss Data Grid's Remote Client-Server mode. Procedure 14.9. Remove an Application in Remote Client-Server Mode To remove an application, use the following command from the root directory of this quickstart:
[ "USDJBOSS_HOME/bin/standalone.sh", "USDJBOSS_HOME\\bin\\standalone.bat", "<local-cache name=\"carcache\" start=\"EAGER\" batching=\"false\" statistics=\"true\"> <eviction strategy=\"LIRS\" max-entries=\"4\"/> </local-cache>", "USDJDG_HOME/bin/ standalone.sh -Djboss.socket.binding.port-offset=100", "USDJBOSS_HOME/bin/ standalone.sh", "datagrid.host=localhost datagrid.hotrod.port=11322", "mvn clean package jboss-as:deploy -Premote-jbossas", "http://localhost:8080/jboss-carmart", "mvn jboss-as:undeploy -Premote-jbossas" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-the_non-transactional_carmart_quickstart_in_remote_client-server_mode_jboss_eap
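Because the JBoss Data Grid server in this quickstart is started with a socket binding port offset of 100, Hot Rod listens on 11322 rather than the default 11222. The following is a small sketch, not part of the official procedure, for confirming that both servers are listening where the quickstart expects them; adjust the host and ports if you changed datagrid.properties.
# Check that the Hot Rod endpoint is reachable (pure bash, no extra tools required).
(echo > /dev/tcp/localhost/11322) && echo "Hot Rod endpoint reachable"
# After "mvn clean package jboss-as:deploy -Premote-jbossas", the application should answer here:
curl -I http://localhost:8080/jboss-carmart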
Chapter 25. Pricing
Chapter 25. Pricing This section describes different ways you can charge your developers for using your API. Setup fee is a one-time charge applied upon subscription to the service; it is not charged when switching to another plan. It appears in the invoice/credit card only on the first month of a subscription. Can be configured on application plans, service plans, and account plans. Cost per month is a recurring cost charged monthly. It is prorated if the subscription occurred in the middle of the month. Sometimes it is referred to as a fixed fee. Can be configured on application plans, service plans, and account plans. Variable costs are the costs derived from the pricing rules applied to each method/metric configured in the application plan. They are based on the usage of your API and therefore cannot be known in advance, only when the billing period has concluded. Only available on application plans. Example 25.1. Pricing rules Pricing rules define the cost of each API request. Multiple pricing rules on the same metric divide up the ranges of when a pricing rule applies. Pricing rules are based on the calendar month, and the counter is reset at 00:00 UTC on the 1st day of each month. Example 1 Example 2 Example 3 Note Pricing rules are defined for metrics and methods. The actual API requests are mapped to these metrics and methods through the Mapping Rules. 25.2. Setting pricing rules Go to [Your_API_service] > Applications > Application Plans. Select an existing application plan or create a new one. In the section Metrics, Methods, Limits & Pricing Rules, click Pricing (x) to open the pricing section. Click new pricing rule. Set the values From, To and Cost per Unit and click Create pricing rule. Repeat the last two steps to create all the necessary pricing rule ranges. Leave the To field empty to set the rule to infinity. The maximum number of decimals for the cost of a metric is 4; if a number with more decimals is entered, the value is rounded to 4 decimal places. 25.3. Update existing pricing rules Click on edit. Make the necessary adjustments to the From, To and Cost per Unit fields. Click Update pricing rule.
[ "If you have a plan with monthly cost of USD10, but want to charge your developer a USD5 setup fee. The initial charge would be for USD15, while all subsequent charges would be for USD10.", "Until 100 calls per month (from 1 to 100) each API call can be charged at USD0.04, and starting from the call 101 (from 101 to infinity) the calls are charged at USD0.10.", "The first 1000 calls are not charged (cost USD0), because they are included in the plan which has a monthly fixed cost. Starting from call 1001, each call is charged at USD0.50.", "Calls from 1 to 100 are charged at USD0.30, 100 to 500 - at USD0.40, and 500 and further - at USD0.50." ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/pricing
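To make the variable-cost calculation concrete, here is a hypothetical worked example using the tier boundaries from Example 3 above; the traffic figure of 650 calls in one calendar month is invented for illustration. The first 100 calls are billed at USD0.30, calls 101-500 at USD0.40, and calls 501-650 at USD0.50.
# 100 calls at 0.30, 400 calls at 0.40, 150 calls at 0.50
echo "100*0.30 + 400*0.40 + 150*0.50" | bc
# 265.00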
function::proc_mem_size_pid
function::proc_mem_size_pid Name function::proc_mem_size_pid - Total program virtual memory size in pages Synopsis Arguments pid The pid of process to examine Description Returns the total virtual memory size in pages of the given process, or zero when that process doesn't exist or the number of pages couldn't be retrieved.
[ "function proc_mem_size_pid:long(pid:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-size-pid
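As a usage sketch, the one-liner below samples the function every five seconds for a hypothetical process; replace 1234 with a real PID. The result is in pages, so multiply by the page size (getconf PAGESIZE, typically 4096) to convert it to bytes.
# Print the virtual memory size, in pages, of PID 1234 every 5 seconds until interrupted.
stap -e 'probe timer.s(5) { printf("%d pages\n", proc_mem_size_pid(1234)) }'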
Chapter 10. MachineSet [machine.openshift.io/v1beta1]
Chapter 10. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineSetSpec defines the desired state of MachineSet status object MachineSetStatus defines the observed state of MachineSet 10.1.1. .spec Description MachineSetSpec defines the desired state of MachineSet Type object Property Type Description deletePolicy string DeletePolicy defines the policy used to identify nodes to delete when downscaling. Defaults to "Random". Valid values are "Random, "Newest", "Oldest" minReadySeconds integer MinReadySeconds is the minimum number of seconds for which a newly created machine should be ready. Defaults to 0 (machine will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. selector object Selector is a label query over machines that should match the replica count. Label keys and values that must match in order to be controlled by this MachineSet. It must match the machine template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template object Template is the object that describes the machine that will be created if insufficient replicas are detected. 10.1.2. .spec.selector Description Selector is a label query over machines that should match the replica count. Label keys and values that must match in order to be controlled by this MachineSet. It must match the machine template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 10.1.3. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 10.1.4. 
.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 10.1.5. .spec.template Description Template is the object that describes the machine that will be created if insufficient replicas are detected. Type object Property Type Description metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the machine. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 10.1.6. .spec.template.metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. 
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 10.1.7. .spec.template.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 10.1.8. .spec.template.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 10.1.9. .spec.template.spec Description Specification of the desired behavior of the machine. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. Example use case is cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find out machines at provider which could not get registered as Kubernetes nodes. 
With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by autoscaler to be able to have a provider view of the list of machines. Another list of nodes is queried from the k8s apiserver and then a comparison is done to find out unregistered machines and are marked for delete. This field will be set by the actuators and consumed by higher level entities like autoscaler that will be interfacing with cluster-api as generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled e.g. if you ask the machine controller to apply a taint and then manually remove the taint the machine controller will put it back) but not have the machine controller remove any taints taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. 10.1.10. .spec.template.spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks be actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 10.1.11. .spec.template.spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 10.1.12. .spec.template.spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifcycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, eg. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 10.1.13. .spec.template.spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks be actioned after the Machine has been drained. Type array 10.1.14. .spec.template.spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifcycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, eg. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. 
This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 10.1.15. .spec.template.spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 10.1.16. .spec.template.spec.metadata.ownerReferences Description List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. 
Type array 10.1.17. .spec.template.spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 10.1.18. .spec.template.spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation. Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 10.1.19. .spec.template.spec.taints Description The list of the taints to be applied to the corresponding Node in additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled e.g. if you ask the machine controller to apply a taint and then manually remove the taint the machine controller will put it back) but not have the machine controller remove any taints Type array 10.1.20. .spec.template.spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 10.1.21. .status Description MachineSetStatus defines the observed state of MachineSet Type object Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this MachineSet. conditions array Conditions defines the current state of the MachineSet conditions[] object Condition defines an observation of a Machine API resource operational state. errorMessage string errorReason string In the event that there is a terminal problem reconciling the replicas, both ErrorReason and ErrorMessage will be set. 
ErrorReason will be populated with a succinct value suitable for machine interpretation, while ErrorMessage will contain a more verbose string suitable for logging and human consumption. These fields should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the MachineTemplate's spec or the configuration of the machine controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the machine controller, or the responsible machine controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the MachineSet object and/or logged in the controller's output. fullyLabeledReplicas integer The number of replicas that have labels matching the labels of the machine template of the MachineSet. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed MachineSet. readyReplicas integer The number of ready replicas for this MachineSet. A machine is considered ready when the node has been created and is "Ready". replicas integer Replicas is the most recently observed number of replicas. 10.1.22. .status.conditions Description Conditions defines the current state of the MachineSet Type array 10.1.23. .status.conditions[] Description Condition defines an observation of a Machine API resource operational state. Type object Required type Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string A human readable message indicating details about the transition. This field may be empty. reason string The reason for the condition's last transition in CamelCase. The specific API may choose whether or not this field is considered a guaranteed API. This field may not be empty. severity string Severity provides an explicit classification of Reason code, so the users or machines can immediately understand the current situation and act accordingly. The Severity field MUST be set only when Status=False. status string Status of the condition, one of True, False, Unknown. type string Type of condition in CamelCase or in foo.example.com/CamelCase. Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. 10.2. 
API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1beta1/machinesets GET : list objects of kind MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets DELETE : delete collection of MachineSet GET : list objects of kind MachineSet POST : create a MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name} DELETE : delete a MachineSet GET : read the specified MachineSet PATCH : partially update the specified MachineSet PUT : replace the specified MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/scale GET : read scale of the specified MachineSet PATCH : partially update scale of the specified MachineSet PUT : replace scale of the specified MachineSet /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/status GET : read status of the specified MachineSet PATCH : partially update status of the specified MachineSet PUT : replace status of the specified MachineSet 10.2.1. /apis/machine.openshift.io/v1beta1/machinesets HTTP method GET Description list objects of kind MachineSet Table 10.1. HTTP responses HTTP code Reponse body 200 - OK MachineSetList schema 401 - Unauthorized Empty 10.2.2. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets HTTP method DELETE Description delete collection of MachineSet Table 10.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineSet Table 10.3. HTTP responses HTTP code Reponse body 200 - OK MachineSetList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineSet Table 10.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.5. Body parameters Parameter Type Description body MachineSet schema Table 10.6. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 202 - Accepted MachineSet schema 401 - Unauthorized Empty 10.2.3. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name} Table 10.7. Global path parameters Parameter Type Description name string name of the MachineSet HTTP method DELETE Description delete a MachineSet Table 10.8. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineSet Table 10.10. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineSet Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.12. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineSet Table 10.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.14. Body parameters Parameter Type Description body MachineSet schema Table 10.15. 
HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 401 - Unauthorized Empty 10.2.4. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/scale Table 10.16. Global path parameters Parameter Type Description name string name of the MachineSet HTTP method GET Description read scale of the specified MachineSet Table 10.17. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified MachineSet Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.19. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified MachineSet Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.21. Body parameters Parameter Type Description body Scale schema Table 10.22. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 10.2.5. 
/apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machinesets/{name}/status Table 10.23. Global path parameters Parameter Type Description name string name of the MachineSet HTTP method GET Description read status of the specified MachineSet Table 10.24. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineSet Table 10.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.26. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineSet Table 10.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.28. Body parameters Parameter Type Description body MachineSet schema Table 10.29. HTTP responses HTTP code Reponse body 200 - OK MachineSet schema 201 - Created MachineSet schema 401 - Unauthorized Empty
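As an illustration of the scale subresource endpoints listed above, the following commands show one way to read and change the replica count of a MachineSet with the oc client. The MachineSet name is a placeholder, and the openshift-machine-api namespace is an assumption based on the default location of compute machine sets; substitute your own values as needed. Read the scale subresource directly through the API: USD oc get --raw /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/<machineset_name>/scale Scale the MachineSet, which updates the same subresource: USD oc scale machineset.machine.openshift.io <machineset_name> -n openshift-machine-api --replicas=3 These commands correspond to the GET and PUT or PATCH operations documented for the /scale endpoint.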
Chapter 14. Support 14.1. Support overview You can collect data about your environment, monitor the health of your cluster and virtual machines (VMs), and troubleshoot OpenShift Virtualization resources with the following tools. 14.1.1. Web console The OpenShift Container Platform web console displays resource usage, alerts, events, and trends for your cluster and for OpenShift Virtualization components and resources. Table 14.1. Web console pages for monitoring and troubleshooting Page Description Overview page Cluster details, status, alerts, inventory, and resource usage Virtualization Overview tab OpenShift Virtualization resources, usage, alerts, and status Virtualization Top consumers tab Top consumers of CPU, memory, and storage Virtualization Migrations tab Progress of live migrations VirtualMachines VirtualMachine VirtualMachine details Metrics tab VM resource usage, storage, network, and migration VirtualMachines VirtualMachine VirtualMachine details Events tab List of VM events VirtualMachines VirtualMachine VirtualMachine details Diagnostics tab VM status conditions and volume snapshot status 14.1.2. Collecting data for Red Hat Support When you submit a support case to Red Hat Support, it is helpful to provide debugging information. You can gather debugging information by performing the following steps: Collecting data about your environment Configure Prometheus and Alertmanager and collect must-gather data for OpenShift Container Platform and OpenShift Virtualization. Collecting data about VMs Collect must-gather data and memory dumps from VMs. must-gather tool for OpenShift Virtualization Configure and use the must-gather tool. 14.1.3. Monitoring You can monitor the health of your cluster and VMs. For details about monitoring tools, see the Monitoring overview . 14.1.4. Troubleshooting Troubleshoot OpenShift Virtualization components and VMs and resolve issues that trigger alerts in the web console. Events View important life-cycle information for VMs, namespaces, and resources. Logs View and configure logs for OpenShift Virtualization components and VMs. Runbooks Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the web console. Troubleshooting data volumes Troubleshoot data volumes by analyzing conditions and events. 14.2. Collecting data for Red Hat Support When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools: must-gather tool The must-gather tool collects diagnostic information, including resource definitions and service logs. Prometheus Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Alertmanager The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. For information about the OpenShift Container Platform monitoring stack, see About OpenShift Container Platform monitoring . 14.2.1. Collecting data about your environment Collecting data about your environment minimizes the time required to analyze and determine the root cause. Prerequisites Set the retention time for Prometheus metrics data to a minimum of seven days. Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. 
Record the exact number of affected nodes and virtual machines. Procedure Collect must-gather data for the cluster . Collect must-gather data for Red Hat OpenShift Data Foundation , if necessary. Collect must-gather data for OpenShift Virtualization . Collect Prometheus metrics for the cluster . 14.2.2. Collecting data about virtual machines Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause. Prerequisites Linux VMs: Install the latest QEMU guest agent . Windows VMs: Record the Windows patch update details. Install the latest VirtIO drivers . Install the latest QEMU guest agent . If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP by using the web console or the command line to determine whether there is a problem with the connection software. Procedure Collect must-gather data for the VMs using the /usr/bin/gather script. Collect screenshots of VMs that have crashed before you restart them. Collect memory dumps from VMs before remediation attempts. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network. 14.2.3. Using the must-gather tool for OpenShift Virtualization You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image. The default data collection includes information about the following resources: OpenShift Virtualization Operator namespaces, including child objects OpenShift Virtualization custom resource definitions Namespaces that contain virtual machines Basic virtual machine definitions Procedure Run the following command to collect data about OpenShift Virtualization: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11 \ -- /usr/bin/gather 14.2.3.1. must-gather tool options You can specify a combination of scripts and environment variables for the following options: Collecting detailed virtual machine (VM) information from a namespace Collecting detailed information about specified VMs Collecting image, image-stream, and image-stream-tags information Limiting the maximum number of parallel processes used by the must-gather tool 14.2.3.1.1. Parameters Environment variables You can specify environment variables for a compatible script. NS=<namespace_name> Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces. VM=<vm_name> Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable. PROS=<number_of_processes> Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5 . Important Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended. Scripts Each script is compatible only with certain environment variable combinations. /usr/bin/gather Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the PROS variable. /usr/bin/gather --vms_details Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to OpenShift Virtualization resources. Specifying namespaces includes their child objects. 
If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable. /usr/bin/gather --images Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the PROS variable. 14.2.3.1.2. Usage and examples Environment variables are optional. You can run a script by itself or with one or more compatible environment variables. Table 14.2. Compatible parameters Script Compatible environment variable /usr/bin/gather PROS=<number_of_processes> /usr/bin/gather --vms_details For a namespace: NS=<namespace_name> For a VM: VM=<vm_name> NS=<namespace_name> PROS=<number_of_processes> /usr/bin/gather --images PROS=<number_of_processes> Syntax USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11 \ -- <environment_variable_1> <environment_variable_2> <script_name> Default data collection parallel processes By default, five processes run in parallel. USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11 \ -- PROS=5 /usr/bin/gather 1 1 You can modify the number of parallel processes by changing the default. Detailed VM information The following command collects detailed VM information for the my-vm VM in the mynamespace namespace: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11 \ -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1 1 The NS environment variable is mandatory if you use the VM environment variable. Image, image-stream, and image-stream-tags information The following command collects image, image-stream, and image-stream-tags information from the cluster: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.11 \ -- /usr/bin/gather --images 14.3. Monitoring 14.3.1. Monitoring overview You can monitor the health of your cluster and virtual machines (VMs) with the following tools: OpenShift Container Platform cluster checkup framework Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions: Network connectivity and latency between two VMs attached to a secondary network interface VM running a Data Plane Development Kit (DPDK) workload with zero packet loss Important The OpenShift Container Platform cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prometheus queries for virtual resources Query vCPU, network, storage, and guest memory swapping usage and live migration progress. VM custom metrics Configure the node-exporter service to expose internal VM metrics and processes. VM health checks Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
Important The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 14.3.2. OpenShift Container Platform cluster checkup framework OpenShift Virtualization includes predefined checkups that can be used for cluster maintenance and troubleshooting. Important The OpenShift Container Platform cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 14.3.2.1. About the OpenShift Container Platform cluster checkup framework A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup. By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly. Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times. Important You must always: Verify that the checkup image is from a trustworthy source before applying it. Review the checkup permissions before creating the Role and RoleBinding objects. 14.3.2.2. Virtual machine latency checkup You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility. You run a latency checkup by performing the following steps: Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup. Create a config map to provide the input to run the checkup and to store the results. Create a job to run the checkup. Review the results in the config map. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job. When you are finished, delete the latency checkup resources. Prerequisites You installed the OpenShift CLI ( oc ). The cluster has at least two worker nodes. The Multus Container Network Interface (CNI) plugin is installed on the cluster. You configured a network attachment definition for a namespace. 
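The following manifest is a minimal sketch of a network attachment definition that satisfies the last prerequisite; it assumes a Linux bridge named br1 exists on the worker nodes and reuses the blue-network name that appears in the input config map example below. Adjust the CNI type and bridge name to match your cluster's network configuration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: blue-network
  namespace: <target_namespace>
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "blue-network",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
Apply the manifest in the namespace where the checkup will run: USD oc apply -n <target_namespace> -f <network_attachment_definition>.yaml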
Procedure Create a ServiceAccount , Role , and RoleBinding manifest for the latency checkup: Example 14.1. Example role manifest file --- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: ["kubevirt.io"] resources: ["virtualmachineinstances"] verbs: ["get", "create", "delete"] - apiGroups: ["subresources.kubevirt.io"] resources: ["virtualmachineinstances/console"] verbs: ["get"] - apiGroups: ["k8s.cni.cncf.io"] resources: ["network-attachment-definitions"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: ["get", "update"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io Apply the ServiceAccount , Role , and RoleBinding manifest: USD oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1 1 <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides. Create a ConfigMap manifest that contains the input parameters for the checkup: Example input config map apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" 1 spec.param.maxDesiredLatencyMilliseconds: "10" 2 spec.param.sampleDurationSeconds: "5" 3 spec.param.sourceNode: "worker1" 4 spec.param.targetNode: "worker2" 5 1 The name of the NetworkAttachmentDefinition object. 2 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails. 3 Optional: The duration of the latency check, in seconds. 4 Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty. 5 Optional: When specified, latency is measured from the source node to this node. 
Apply the config map manifest in the target namespace: USD oc apply -n <target_namespace> -f <latency_config_map>.yaml Create a Job manifest to run the checkup: Example job manifest apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.13.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid Apply the Job manifest: USD oc apply -n <target_namespace> -f <latency_job>.yaml Wait for the job to complete: USD oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error. USD oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" spec.param.maxDesiredLatencyMilliseconds: "10" spec.param.sampleDurationSeconds: "5" spec.param.sourceNode: "worker1" spec.param.targetNode: "worker2" status.succeeded: "true" status.failureReason: "" status.completionTimestamp: "2022-01-01T09:00:00Z" status.startTimestamp: "2022-01-01T09:00:07Z" status.result.avgLatencyNanoSec: "177000" status.result.maxLatencyNanoSec: "244000" 1 status.result.measurementDurationSec: "5" status.result.minLatencyNanoSec: "135000" status.result.sourceNode: "worker1" status.result.targetNode: "worker2" 1 The maximum measured latency in nanoseconds. Optional: To view the detailed job log in case of checkup failure, use the following command: USD oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace> Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> kubevirt-vm-latency-checkup USD oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config Optional: If you do not plan to run another checkup, delete the roles manifest: USD oc delete -f <latency_sa_roles_rolebinding>.yaml 14.3.2.3. DPDK checkup Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator pod and a VM running a test DPDK application. You run a DPDK checkup by performing the following steps: Create a service account, role, and role bindings for the DPDK checkup and a service account for the traffic generator pod. Create a security context constraints resource for the traffic generator pod. Create a config map to provide the input to run the checkup and to store the results. Create a job to run the checkup. Review the results in the config map. 
Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job. When you are finished, delete the DPDK checkup resources. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have configured the compute nodes to run DPDK applications on VMs with zero packet loss. Important The traffic generator pod created by the checkup has elevated privileges: It runs as root. It has a bind mount to the node's file system. The container image of the traffic generator is pulled from the upstream Project Quay container registry. Procedure Create a ServiceAccount , Role , and RoleBinding manifest for the DPDK checkup and the traffic generator pod: Example 14.2. Example service account, role, and rolebinding manifest file --- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "get", "update" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachineinstances" ] verbs: [ "create", "get", "delete" ] - apiGroups: [ "subresources.kubevirt.io" ] resources: [ "virtualmachineinstances/console" ] verbs: [ "get" ] - apiGroups: [ "" ] resources: [ "pods" ] verbs: [ "create", "get", "delete" ] - apiGroups: [ "" ] resources: [ "pods/exec" ] verbs: [ "create" ] - apiGroups: [ "k8s.cni.cncf.io" ] resources: [ "network-attachment-definitions" ] verbs: [ "get" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker --- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-traffic-gen-sa Apply the ServiceAccount , Role , and RoleBinding manifest: USD oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml Create a SecurityContextConstraints manifest for the traffic generator pod: Example security context constraints manifest apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: dpdk-checkup-traffic-gen allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - IPC_LOCK - NET_ADMIN - NET_RAW - SYS_RESOURCE defaultAddCapabilities: null fsGroup: type: RunAsAny groups: [] readOnlyRootFilesystem: false requiredDropCapabilities: null runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - runtime/default - unconfined supplementalGroups: type: RunAsAny users: - system:serviceaccount:dpdk-checkup-ns:dpdk-checkup-traffic-gen-sa Apply the SecurityContextConstraints manifest: USD oc apply -f <dpdk_scc>.yaml Create a ConfigMap manifest that contains the input parameters for the checkup: Example input config map apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 
spec.param.trafficGeneratorRuntimeClassName: <runtimeclass_name> 2 spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1" 3 spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1" 4 1 The name of the NetworkAttachmentDefinition object. 2 The RuntimeClass resource that the traffic generator pod uses. 3 The container image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry. 4 The container disk image for the VM. In this example, the image is pulled from the upstream Project Quay Container Registry. Apply the ConfigMap manifest in the target namespace: USD oc apply -n <target_namespace> -f <dpdk_config_map>.yaml Create a Job manifest to run the checkup: Example job manifest apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.13.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid Apply the Job manifest: USD oc apply -n <target_namespace> -f <dpdk_job>.yaml Wait for the job to complete: USD oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m Review the results of the checkup by running the following command: USD oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 1h2m spec.param.NetworkAttachmentDefinitionName: "mlx-dpdk-network-1" spec.param.trafficGeneratorRuntimeClassName: performance-performance-zeus10 spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1" spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1" status.succeeded: true status.failureReason: " " status.startTimestamp: 2022-12-21T09:33:06+00:00 status.completionTimestamp: 2022-12-21T11:33:06+00:00 status.result.actualTrafficGeneratorTargetNode: worker-dpdk1 status.result.actualDPDKVMTargetNode: worker-dpdk2 status.result.dropRate: 0 Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> dpdk-checkup USD oc delete config-map -n <target_namespace> dpdk-checkup-config Optional: If you do not plan to run another checkup, delete the ServiceAccount , Role , and RoleBinding manifest: USD oc delete -f <dpdk_sa_roles_rolebinding>.yaml 14.3.2.3.1. DPDK checkup config map parameters The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup: Table 14.3. DPDK checkup config map parameters Parameter Description Is Mandatory spec.timeout The time, in minutes, before the checkup fails. True spec.param.networkAttachmentDefinitionName The name of the NetworkAttachmentDefinition object of the SR-IOV NICs connected. True spec.param.trafficGeneratorRuntimeClassName The RuntimeClass resource that the traffic generator pod uses. 
True spec.param.trafficGeneratorImage The container image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:main . False spec.param.trafficGeneratorNodeSelector The node on which the traffic generator pod is to be scheduled. The node should be configured to allow DPDK traffic. False spec.param.trafficGeneratorPacketsPerSecond The number of packets per second, in kilo (k) or million(m). The default value is 14m. False spec.param.trafficGeneratorEastMacAddress The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address in the format 50:xx:xx:xx:xx:01 . False spec.param.trafficGeneratorWestMacAddress The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address in the format 50:xx:xx:xx:xx:02 . False spec.param.vmContainerDiskImage The container disk image for the VM. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:main . False spec.param.DPDKLabelSelector The label of the node on which the VM runs. The node should be configured to allow DPDK traffic. False spec.param.DPDKEastMacAddress The MAC address of the NIC that is connected to the VM. The default value is a random MAC address in the format 60:xx:xx:xx:xx:01 . False spec.param.DPDKWestMacAddress The MAC address of the NIC that is connected to the VM. The default value is a random MAC address in the format 60:xx:xx:xx:xx:02 . False spec.param.testDuration The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. False spec.param.portBandwidthGB The maximum bandwidth of the SR-IOV NIC. The default value is 10GB. False spec.param.verbose When set to true , it increases the verbosity of the checkup log. The default value is false . False 14.3.2.3.2. Building a container disk image for RHEL virtual machines You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map. To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images. Prerequisites The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory. You have installed the image builder tool and its CLI ( composer-cli ) on the VM. You have installed the virt-customize tool: # dnf install libguestfs-tools You have installed the Podman CLI tool ( podman ). 
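If the image builder tool is not yet installed on the image builder VM, one way to add it on RHEL 8 is shown below. The package names follow the standard RHEL 8 image builder packaging; the repositories available depend on your subscription, so treat this as a sketch rather than a definitive procedure. # dnf install osbuild-composer composer-cli # systemctl enable --now osbuild-composer.socket The osbuild-composer service provides the build backend, and composer-cli is the command-line client used in the procedure that follows.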
Procedure Verify that you can build a RHEL 8.7 image: # composer-cli distros list Note To run the composer-cli commands as non-root, add your user to the weldr or root groups: # usermod -a -G weldr user USD newgrp weldr Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time: USD cat << EOF > dpdk-vm.toml name = "dpdk_image" description = "Image to use with the DPDK checkup" version = "0.0.1" distro = "rhel-87" [[packages]] name = "dpdk" [[packages]] name = "dpdk-tools" [[packages]] name = "driverctl" [[packages]] name = "tuned-profiles-cpu-partitioning" [customizations.kernel] append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7" [customizations.services] disabled = ["NetworkManager-wait-online", "sshd"] EOF Push the blueprint file to the image builder tool by running the following command: # composer-cli blueprints push dpdk-vm.toml Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process. # composer-cli compose start dpdk_image qcow2 Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the step. # composer-cli compose status Enter the following command to download the qcow2 image file by specifying its UUID: # composer-cli compose image <UUID> Create the customization scripts by running the following commands: USD cat <<EOF >customize-vm echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf tuned-adm profile cpu-partitioning echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf EOF USD cat <<EOF >first-boot driverctl set-override 0000:06:00.0 vfio-pci driverctl set-override 0000:07:00.0 vfio-pci mkdir /mnt/huge mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB EOF Use the virt-customize tool to customize the image generated by the image builder tool: USD virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel To create a Dockerfile that contains all the commands to build the container disk image, enter the following command: USD cat << EOF > Dockerfile FROM scratch COPY <uuid>-disk.qcow2 /disk/ EOF where: <uuid>-disk.qcow2 Specifies the name of the custom image in qcow2 format. Build and tag the container by running the following command: USD podman build . -t dpdk-rhel:latest Push the container disk image to a registry that is accessible from your cluster by running the following command: USD podman push dpdk-rhel:latest Provide a link to the container disk image in the spec.param.vmContainerDiskImage attribute in the DPDK checkup config map. 14.3.2.4. Additional resources Attaching a virtual machine to multiple networks Using a virtual function in DPDK mode with an Intel NIC Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate Installing image builder How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription Manager 14.3.3. Prometheus queries for virtual resources OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status. Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics. 14.3.3.1. 
Prerequisites To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes . For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests. 14.3.3.2. Querying metrics The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator, you can query metrics for all core OpenShift Container Platform and user-defined projects. As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. 14.3.3.2.1. Querying metrics for all projects as a cluster administrator As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective in the OpenShift Container Platform web console, select Observe Metrics . To add one or more queries, do any of the following: Option Description Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Select Add query . Duplicate an existing query. Select the Options menu to the query, then choose Duplicate query . Disable a query from being run. Select the Options menu to the query and choose Disable query . To run queries that you created, select Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Note By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following: Option Description Hide all metrics from a query. Click the Options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. 
Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. Hide the plot. Select Hide graph . 14.3.3.2.2. Querying metrics for user-defined projects as a developer You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for in the Project: list. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL . The metrics from the queries are visualized on the plot. Note In the Developer perspective, you can only run one query at a time. Explore the visualized metrics by doing any of the following: Option Description Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. 14.3.3.3. Virtualization metrics The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. Note The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output. 14.3.3.3.1. vCPU metrics The following query can identify virtual machines that are waiting for Input/Output (I/O): kubevirt_vmi_vcpu_wait_seconds Returns the wait time (in seconds) for a virtual machine's vCPU. Type: Counter. A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O. Note To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. 
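The schedstats=enable kernel argument mentioned in the note can be applied to compute nodes with a MachineConfig object. The following manifest is a minimal sketch; the object name is an assumption, and applying it causes the Machine Config Operator to reboot the affected nodes one by one.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-schedstats
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - schedstats=enable
USD oc apply -f 99-worker-schedstats.yaml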
Example vCPU wait time query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1 1 This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period. 14.3.3.3.2. Network metrics The following queries can identify virtual machines that are saturating the network: kubevirt_vmi_network_receive_bytes_total Returns the total amount of traffic received (in bytes) on the virtual machine's network. Type: Counter. kubevirt_vmi_network_transmit_bytes_total Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Type: Counter. Example network traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs sending and receiving the most network traffic at every given moment over a six-minute time period. 14.3.3.3.3. Storage metrics 14.3.3.3.3.1. Storage-related traffic The following queries can identify VMs that are writing large amounts of data: kubevirt_vmi_storage_read_traffic_bytes_total Returns the total amount of storage reads (in bytes) of the virtual machine's storage-related traffic. Type: Counter. kubevirt_vmi_storage_write_traffic_bytes_total Returns the total amount of storage writes (in bytes) of the virtual machine's storage-related traffic. Type: Counter. Example storage-related traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period. 14.3.3.3.3.2. Storage snapshot data kubevirt_vmsnapshot_disks_restored_from_source_total Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge. kubevirt_vmsnapshot_disks_restored_from_source_bytes Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge. Examples of storage snapshot data queries kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"} 1 1 This query returns the total number of virtual machine disks restored from the source virtual machine. kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1 1 This query returns the amount of space in bytes restored from the source virtual machine. 14.3.3.3.3.3. I/O performance The following queries can determine the I/O performance of storage devices: kubevirt_vmi_storage_iops_read_total Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter. kubevirt_vmi_storage_iops_write_total Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter. Example I/O performance query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period. 14.3.3.3.4. Guest memory swapping metrics The following queries can identify which swap-enabled guests are performing the most memory swapping: kubevirt_vmi_memory_swap_in_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge. Example memory swapping query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period. Note Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue. 14.3.3.3.5. Live migration metrics The following metrics can be queried to show live migration status: kubevirt_migrate_vmi_data_processed_bytes The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge. kubevirt_migrate_vmi_data_remaining_bytes The amount of guest operating system data that remains to be migrated. Type: Gauge. kubevirt_migrate_vmi_dirty_memory_rate_bytes The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge. kubevirt_migrate_vmi_pending_count The number of pending migrations. Type: Gauge. kubevirt_migrate_vmi_scheduling_count The number of scheduling migrations. Type: Gauge. kubevirt_migrate_vmi_running_count The number of running migrations. Type: Gauge. kubevirt_migrate_vmi_succeeded The number of successfully completed migrations. Type: Gauge. kubevirt_migrate_vmi_failed The number of failed migrations. Type: Gauge. 14.3.3.4. Additional resources Monitoring overview Querying Prometheus Prometheus query examples 14.3.4. Exposing custom metrics for virtual machines OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics. In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service. 14.3.4.1. Configuring the node exporter service The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true . Procedure Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml . kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7 1 The node-exporter service that exposes the metrics from the virtual machines. 2 The namespace where the service is created. 3 The label for the service. 
The ServiceMonitor uses this label to match this service. 4 The name given to the port that exposes metrics on port 9100 for the ClusterIP service. 5 The target port used by node-exporter-service to listen for requests. 6 The TCP port number of the virtual machine that is configured with the monitor label. 7 The label used to match the virtual machine's pods. In this example, any virtual machine's pod with the label monitor and a value of metrics will be matched. Create the node-exporter service: USD oc create -f node-exporter-service.yaml 14.3.4.2. Configuring a virtual machine with the node exporter service Download the node-exporter file onto the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots. Prerequisites The pods for the component are running in the openshift-user-workload-monitoring project. Grant the monitoring-edit role to users who need to monitor this user-defined project. Procedure Log on to the virtual machine. Download the node-exporter file onto the virtual machine by using the directory path that applies to the version of the node-exporter file. USD wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz Extract the executable and place it in the /usr/bin directory. USD sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \ --directory /usr/bin --strip 1 "*/node_exporter" Create a node_exporter.service file in this directory path: /etc/systemd/system . This systemd service file runs the node-exporter service when the virtual machine reboots. [Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target Enable and start the systemd service. USD sudo systemctl enable node_exporter.service USD sudo systemctl start node_exporter.service Verification Verify that the node-exporter agent is reporting metrics from the virtual machine. USD curl http://localhost:9100/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5244e-05 go_gc_duration_seconds{quantile="0.25"} 3.0449e-05 go_gc_duration_seconds{quantile="0.5"} 3.7913e-05 14.3.4.3. Creating a custom monitoring label for virtual machines To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine's YAML file. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Access to the web console to stop and restart a virtual machine. Procedure Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics . spec: template: metadata: labels: monitor: metrics Stop and restart the virtual machine to create a new pod with the monitor label that you specified. 14.3.4.3.1. Querying the node-exporter service for metrics Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure Obtain the HTTP service endpoint by specifying the namespace for the service: USD oc get service -n <namespace> <node-exporter-service> To list all available metrics for the node-exporter service, query the metrics resource. USD curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^USD" Example output node_arp_entries{device="eth0"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name="0",type="Processor"} 0 node_cooling_device_max_state{name="0",type="Processor"} 0 node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0 node_cpu_guest_seconds_total{cpu="0",mode="user"} 0 node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06 node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61 node_cpu_seconds_total{cpu="0",mode="irq"} 233.91 node_cpu_seconds_total{cpu="0",mode="nice"} 551.47 node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3 node_cpu_seconds_total{cpu="0",mode="steal"} 86.12 node_cpu_seconds_total{cpu="0",mode="system"} 464.15 node_cpu_seconds_total{cpu="0",mode="user"} 1075.2 node_disk_discard_time_seconds_total{device="vda"} 0 node_disk_discard_time_seconds_total{device="vdb"} 0 node_disk_discarded_sectors_total{device="vda"} 0 node_disk_discarded_sectors_total{device="vdb"} 0 node_disk_discards_completed_total{device="vda"} 0 node_disk_discards_completed_total{device="vdb"} 0 node_disk_discards_merged_total{device="vda"} 0 node_disk_discards_merged_total{device="vdb"} 0 node_disk_info{device="vda",major="252",minor="0"} 1 node_disk_info{device="vdb",major="252",minor="16"} 1 node_disk_io_now{device="vda"} 0 node_disk_io_now{device="vdb"} 0 node_disk_io_time_seconds_total{device="vda"} 174 node_disk_io_time_seconds_total{device="vdb"} 0.054 node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039 node_disk_read_bytes_total{device="vda"} 3.71867136e+08 node_disk_read_bytes_total{device="vdb"} 366592 node_disk_read_time_seconds_total{device="vda"} 19.128 node_disk_read_time_seconds_total{device="vdb"} 0.039 node_disk_reads_completed_total{device="vda"} 5619 node_disk_reads_completed_total{device="vdb"} 96 node_disk_reads_merged_total{device="vda"} 5 node_disk_reads_merged_total{device="vdb"} 0 node_disk_write_time_seconds_total{device="vda"} 240.66400000000002 node_disk_write_time_seconds_total{device="vdb"} 0 node_disk_writes_completed_total{device="vda"} 71584 node_disk_writes_completed_total{device="vdb"} 0 node_disk_writes_merged_total{device="vda"} 19761 node_disk_writes_merged_total{device="vdb"} 0 node_disk_written_bytes_total{device="vda"} 2.007924224e+09 node_disk_written_bytes_total{device="vdb"} 0 14.3.4.4. Creating a ServiceMonitor resource for the node exporter service You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds. 
apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics 1 The name of the ServiceMonitor . 2 The namespace where the ServiceMonitor is created. 3 The interval at which the port will be queried. 4 The name of the port that is queried every 30 seconds Create the ServiceMonitor configuration for the node-exporter service. USD oc create -f node-exporter-metrics-monitor.yaml 14.3.4.4.1. Accessing the node exporter service outside the cluster You can access the node-exporter service outside the cluster and view the exposed metrics. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Expose the node-exporter service. USD oc expose service -n <namespace> <node_exporter_service_name> Obtain the FQDN (Fully Qualified Domain Name) for the route. USD oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host Example output NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org Use the curl command to display metrics for the node-exporter service. USD curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5382e-05 go_gc_duration_seconds{quantile="0.25"} 3.1163e-05 go_gc_duration_seconds{quantile="0.5"} 3.8546e-05 go_gc_duration_seconds{quantile="0.75"} 4.9139e-05 go_gc_duration_seconds{quantile="1"} 0.000189423 14.3.4.5. Additional resources Configuring the monitoring stack Enabling monitoring for user-defined projects Managing metrics Reviewing monitoring dashboards Monitoring application health by using health checks Creating and using config maps Controlling virtual machine states 14.3.5. Virtual machine health checks You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource. 14.3.5.1. About readiness and liveness probes Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive. A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready. A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness. You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests: HTTP GET The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized. TCP socket The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. 
Guest agent ping The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine. 14.3.5.1.1. Defining an HTTP readiness probe Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration. Procedure Include details of the readiness probe in the VM configuration file. Sample readiness probe with an HTTP GET test # ... spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8 # ... 1 The HTTP GET request to perform to connect to the VM. 2 The port of the VM that the probe queries. In the above example, the probe queries port 1500. 3 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints. 4 The time, in seconds, after the VM starts before the readiness probe is initiated. 5 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 6 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 7 The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 8 The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VM by running the following command: USD oc create -f <file_name>.yaml 14.3.5.1.2. Defining a TCP readiness probe Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration. Procedure Include details of the TCP readiness probe in the VM configuration file. Sample readiness probe with a TCP socket test # ... spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5 # ... 1 The time, in seconds, after the VM starts before the readiness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The TCP action to perform. 4 The port of the VM that the probe queries. 5 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VM by running the following command: USD oc create -f <file_name>.yaml 14.3.5.1.3. Defining an HTTP liveness probe Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. Procedure Include details of the HTTP liveness probe in the VM configuration file. Sample liveness probe with an HTTP GET test # ... spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6 # ... 1 The time, in seconds, after the VM starts before the liveness probe is initiated. 
2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The HTTP GET request to perform to connect to the VM. 4 The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init. 5 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created. 6 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VM by running the following command: USD oc create -f <file_name>.yaml 14.3.5.2. Defining a watchdog You can define a watchdog to monitor the health of the guest operating system by performing the following steps: Configure a watchdog device for the virtual machine (VM). Install the watchdog agent on the guest. The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive: poweroff : The VM powers down immediately. If spec.running is set to true or spec.runStrategy is not set to manual , then the VM reboots. reset : The VM reboots in place and the guest operating system cannot react. Note The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time. shutdown : The VM gracefully powers down by stopping all services. Note Watchdog is not available for Windows VMs. 14.3.5.2.1. Configuring a watchdog device for the virtual machine You configure a watchdog device for the virtual machine (VM). Prerequisites The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb . Procedure Create a YAML file with the following contents: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 # ... 1 Specify poweroff , reset , or shutdown . The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog . This device can now be used by the watchdog binary. Apply the YAML file to your cluster by running the following command: USD oc apply -f <file_name>.yaml Important This procedure is provided for testing watchdog functionality only and must not be run on production machines. Run the following command to verify that the VM is connected to the watchdog device: USD lspci | grep watchdog -i Run one of the following commands to confirm the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Stop the watchdog service: # pkill -9 watchdog 14.3.5.2.2. Installing the watchdog agent on the guest You install the watchdog agent on the guest and start the watchdog service. Procedure Log in to the virtual machine as root user. 
Install the watchdog package and its dependencies: # yum install watchdog Uncomment the following line in the /etc/watchdog.conf file and save the changes: #watchdog-device = /dev/watchdog Enable the watchdog service to start on boot: # systemctl enable --now watchdog.service 14.3.5.3. Defining a guest agent ping probe Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine (VM) configuration. Important The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The QEMU guest agent must be installed and enabled on the virtual machine. Procedure Include details of the guest agent ping probe in the VM configuration file. For example: Sample guest agent ping probe # ... spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6 # ... 1 The guest agent ping probe to connect to the VM. 2 Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated. 3 Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 4 Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 5 Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 6 Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VM by running the following command: USD oc create -f <file_name>.yaml 14.3.5.4. Additional resources Monitoring application health by using health checks 14.4. Troubleshooting OpenShift Virtualization provides tools and logs for troubleshooting virtual machines and virtualization components. You can troubleshoot OpenShift Virtualization components by using the tools provided in the web console or by using the oc CLI tool. 14.4.1. Events OpenShift Container Platform events are records of important life-cycle information and are useful for monitoring and troubleshooting virtual machine, namespace, and resource issues. VM events: Navigate to the Events tab of the VirtualMachine details page in the web console. Namespace events You can view namespace events by running the following command: USD oc get events -n <namespace> See the list of events for details about specific events. Resource events You can view resource events by running the following command: USD oc describe <resource> <resource_name> 14.4.2. Logs You can review the following logs for troubleshooting: Virtual machine OpenShift Virtualization pod Aggregated OpenShift Virtualization logs 14.4.2.1. Viewing virtual machine logs with the web console You can view virtual machine logs with the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines . 
Select a virtual machine to open the VirtualMachine details page. On the Details tab, click the pod name to open the Pod details page. Click the Logs tab to view the logs. 14.4.2.2. Viewing OpenShift Virtualization pod logs You can view logs for OpenShift Virtualization pods by using the oc CLI tool. You can configure the verbosity level of the logs by editing the HyperConverged custom resource (CR). 14.4.2.2.1. Viewing OpenShift Virtualization pod logs with the CLI You can view logs for the OpenShift Virtualization pods by using the oc CLI tool. Procedure View a list of pods in the OpenShift Virtualization namespace by running the following command: USD oc get pods -n openshift-cnv Example 14.3. Example output NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m View the pod log by running the following command: USD oc logs -n openshift-cnv <pod_name> Note If a pod fails to start, you can use the -- option to view logs from the last attempt. To monitor log output in real time, use the -f option. Example 14.4. Example output {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"} {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"} {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"} {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"} {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"} {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"} 14.4.2.2.2. Configuring OpenShift Virtualization pod log verbosity You can configure the verbosity level of OpenShift Virtualization pod logs by editing the HyperConverged custom resource (CR). Procedure To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6 1 The log verbosity value must be an integer in the range 1-9 , where a higher number indicates a more detailed log. 
In this example, the virtAPI component logs are exposed if their priority level is 5 or higher. Apply your changes by saving and exiting the editor. 14.4.2.2.3. Common error messages The following error messages might appear in OpenShift Virtualization logs: ErrImagePull or ImagePullBackOff Indicates an incorrect deployment configuration or problems with the images that are referenced. 14.4.2.3. Viewing aggregated OpenShift Virtualization logs with the LokiStack You can view aggregated logs for OpenShift Virtualization pods and containers by using the LokiStack in the web console. Prerequisites You deployed the LokiStack. Procedure Navigate to Observe Logs in the web console. Select application , for virt-launcher pod logs, or infrastructure , for OpenShift Virtualization control plane pods and containers, from the log type list. Click Show Query to display the query field. Enter the LogQL query in the query field and click Run Query to display the filtered logs. 14.4.2.3.1. OpenShift Virtualization LogQL queries You can view and filter aggregated logs for OpenShift Virtualization components by running Loki Query Language (LogQL) queries on the Observe Logs page in the web console. The default log type is infrastructure . The virt-launcher log type is application . Optional: You can include or exclude strings or regular expressions by using line filter expressions. Note If the query matches a large number of logs, the query might time out. Table 14.4. OpenShift Virtualization LogQL example queries Component LogQL query All {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" cdi-apiserver cdi-deployment cdi-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="storage" hco-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="deployment" kubemacpool {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="network" virt-api virt-controller virt-handler virt-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="compute" ssp-operator {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |kubernetes_labels_app_kubernetes_io_component="schedule" Container {log_type=~".+",kubernetes_container_name=~"<container>|<container>"} 1 |json|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" 1 Specify one or more containers separated by a pipe ( | ). virt-launcher You must select application from the log type list before running this query. {log_type=~".+", kubernetes_container_name="compute"}|json |!= "custom-ga-command" 1 1 |!= "custom-ga-command" excludes libvirt logs that contain the string custom-ga-command . ( BZ#2177684 ) You can filter log lines to include or exclude strings or regular expressions by using line filter expressions. Table 14.5. 
Line filter expressions Line filter expression Description |= "<string>" Log line contains string != "<string>" Log line does not contain string |~ "<regex>" Log line contains regular expression !~ "<regex>" Log line does not contain regular expression Example line filter expression {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |= "error" != "timeout" 14.4.2.3.2. Additional resources for LokiStack and LogQL About log storage Deploying the LokiStack LogQL log queries in the Grafana documentation 14.4.3. Troubleshooting data volumes You can check the Conditions and Events sections of the DataVolume object to analyze and resolve issues. 14.4.3.1. About data volume conditions and events You can diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command: USD oc describe dv <DataVolume> The Conditions section displays the following Types : Bound Running Ready The Events section provides the following additional information: Type of event Reason for logging Source of the event Message containing additional diagnostic information. The output from oc describe does not always contain Events . An event is generated when the Status , Reason , or Message changes. Both conditions and events react to changes in the state of the data volume. For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well. 14.4.3.2. Analyzing data volume conditions and events By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state. There are many different combinations of conditions. Each must be evaluated in its unique context. Examples of various combinations follow. Bound - A successfully bound PVC displays in this example. Note that the Type is Bound , so the Status is True . If the PVC is not bound, the Status is False . When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True . The Message indicates which PVC owns the data volume. Message , in the Events section, provides further details including how long the PVC has been bound ( Age ) and by what resource ( From ), in this case datavolume-controller : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound Running - In this case, note that Type is Running and Status is False , indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False . However, note that Reason is Completed and the Message field indicates Import Complete . In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404 , listed in the Events section's first Warning .
From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume: Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found Ready - If Type is Ready and Status is True , then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready 14.5. OpenShift Virtualization runbooks Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub. To diagnose and resolve issues that trigger OpenShift Virtualization alerts , follow the procedures in the runbooks. OpenShift Virtualization alerts are displayed in the Virtualization Overview tab in the web console. 14.5.1. CDIDataImportCronOutdated View the runbook for the CDIDataImportCronOutdated alert. 14.5.2. CDIDataVolumeUnusualRestartCount View the runbook for the CDIDataVolumeUnusualRestartCount alert. 14.5.3. CDIDefaultStorageClassDegraded View the runbook for the CDIDefaultStorageClassDegraded alert. 14.5.4. CDIMultipleDefaultVirtStorageClasses View the runbook for the CDIMultipleDefaultVirtStorageClasses alert. 14.5.5. CDINoDefaultStorageClass View the runbook for the CDINoDefaultStorageClass alert. 14.5.6. CDINotReady View the runbook for the CDINotReady alert. 14.5.7. CDIOperatorDown View the runbook for the CDIOperatorDown alert. 14.5.8. CDIStorageProfilesIncomplete View the runbook for the CDIStorageProfilesIncomplete alert. 14.5.9. CnaoDown View the runbook for the CnaoDown alert. 14.5.10. CnaoNMstateMigration View the runbook for the CnaoNMstateMigration alert. 14.5.11. HCOInstallationIncomplete View the runbook for the HCOInstallationIncomplete alert. 14.5.12. HPPNotReady View the runbook for the HPPNotReady alert. 14.5.13. HPPOperatorDown View the runbook for the HPPOperatorDown alert. 14.5.14. HPPSharingPoolPathWithOS View the runbook for the HPPSharingPoolPathWithOS alert. 14.5.15. KubemacpoolDown View the runbook for the KubemacpoolDown alert. 14.5.16. KubeMacPoolDuplicateMacsFound View the runbook for the KubeMacPoolDuplicateMacsFound alert. 14.5.17. KubeVirtComponentExceedsRequestedCPU The KubeVirtComponentExceedsRequestedCPU alert is deprecated . 14.5.18. KubeVirtComponentExceedsRequestedMemory The KubeVirtComponentExceedsRequestedMemory alert is deprecated . 14.5.19. KubeVirtCRModified View the runbook for the KubeVirtCRModified alert. 14.5.20. KubeVirtDeprecatedAPIRequested View the runbook for the KubeVirtDeprecatedAPIRequested alert. 14.5.21. KubeVirtNoAvailableNodesToRunVMs View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert. 14.5.22. KubevirtVmHighMemoryUsage View the runbook for the KubevirtVmHighMemoryUsage alert. 14.5.23. KubeVirtVMIExcessiveMigrations View the runbook for the KubeVirtVMIExcessiveMigrations alert. 14.5.24. LowKVMNodesCount View the runbook for the LowKVMNodesCount alert. 14.5.25. 
LowReadyVirtControllersCount View the runbook for the LowReadyVirtControllersCount alert. 14.5.26. LowReadyVirtOperatorsCount View the runbook for the LowReadyVirtOperatorsCount alert. 14.5.27. LowVirtAPICount View the runbook for the LowVirtAPICount alert. 14.5.28. LowVirtControllersCount View the runbook for the LowVirtControllersCount alert. 14.5.29. LowVirtOperatorCount View the runbook for the LowVirtOperatorCount alert. 14.5.30. NetworkAddonsConfigNotReady View the runbook for the NetworkAddonsConfigNotReady alert. 14.5.31. NoLeadingVirtOperator View the runbook for the NoLeadingVirtOperator alert. 14.5.32. NoReadyVirtController View the runbook for the NoReadyVirtController alert. 14.5.33. NoReadyVirtOperator View the runbook for the NoReadyVirtOperator alert. 14.5.34. OrphanedVirtualMachineInstances View the runbook for the OrphanedVirtualMachineInstances alert. 14.5.35. OutdatedVirtualMachineInstanceWorkloads View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert. 14.5.36. SingleStackIPv6Unsupported View the runbook for the SingleStackIPv6Unsupported alert. 14.5.37. SSPCommonTemplatesModificationReverted View the runbook for the SSPCommonTemplatesModificationReverted alert. 14.5.38. SSPDown View the runbook for the SSPDown alert. 14.5.39. SSPFailingToReconcile View the runbook for the SSPFailingToReconcile alert. 14.5.40. SSPHighRateRejectedVms View the runbook for the SSPHighRateRejectedVms alert. 14.5.41. SSPTemplateValidatorDown View the runbook for the SSPTemplateValidatorDown alert. 14.5.42. UnsupportedHCOModification View the runbook for the UnsupportedHCOModification alert. 14.5.43. VirtAPIDown View the runbook for the VirtAPIDown alert. 14.5.44. VirtApiRESTErrorsBurst View the runbook for the VirtApiRESTErrorsBurst alert. 14.5.45. VirtApiRESTErrorsHigh View the runbook for the VirtApiRESTErrorsHigh alert. 14.5.46. VirtControllerDown View the runbook for the VirtControllerDown alert. 14.5.47. VirtControllerRESTErrorsBurst View the runbook for the VirtControllerRESTErrorsBurst alert. 14.5.48. VirtControllerRESTErrorsHigh View the runbook for the VirtControllerRESTErrorsHigh alert. 14.5.49. VirtHandlerDaemonSetRolloutFailing View the runbook for the VirtHandlerDaemonSetRolloutFailing alert. 14.5.50. VirtHandlerRESTErrorsBurst View the runbook for the VirtHandlerRESTErrorsBurst alert. 14.5.51. VirtHandlerRESTErrorsHigh View the runbook for the VirtHandlerRESTErrorsHigh alert. 14.5.52. VirtOperatorDown View the runbook for the VirtOperatorDown alert. 14.5.53. VirtOperatorRESTErrorsBurst View the runbook for the VirtOperatorRESTErrorsBurst alert. 14.5.54. VirtOperatorRESTErrorsHigh View the runbook for the VirtOperatorRESTErrorsHigh alert. 14.5.55. VirtualMachineCRCErrors The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning . View the runbook for the VMStorageClassWarning alert. 14.5.56. VMCannotBeEvicted View the runbook for the VMCannotBeEvicted alert. 14.5.57. VMStorageClassWarning View the runbook for the VMStorageClassWarning alert.
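Note Before following a runbook, you can confirm that the corresponding alert is actually firing. The following is a minimal sketch of a PromQL query that you can run in the Observe Metrics page described earlier. It uses the built-in Prometheus ALERTS metric; VirtAPIDown is only an example alert name and can be replaced with any alert from the list above.
ALERTS{alertname="VirtAPIDown", alertstate="firing"}
If the query returns no series, the alert is not currently firing.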
Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/virtualization/support
Chapter 1. Introduction to the Red Hat Quay Operator
Chapter 1. Introduction to the Red Hat Quay Operator Use the content in this chapter to execute the following: Install Red Hat Quay on OpenShift Container Platform using the Red Hat Quay Operator Configure managed, or unmanaged, object storage Configure unmanaged components, such as the database, Redis, routes, TLS, and so on Deploy the Red Hat Quay registry on OpenShift Container Platform using the Red Hat Quay Operator Use advanced features supported by Red Hat Quay Upgrade the Red Hat Quay registry by using the Red Hat Quay Operator 1.1. Red Hat Quay Operator components Red Hat Quay has many dependencies. These dependencies include a database, object storage, Redis, and others. The Red Hat Quay Operator manages an opinionated deployment of Red Hat Quay and its dependencies on Kubernetes. These dependencies are treated as components and are configured through the QuayRegistry API. In the QuayRegistry custom resource, the spec.components field configures components. Each component contains two fields: kind (the name of the component), and managed (a boolean that indicates whether the component lifecycle is handled by the Red Hat Quay Operator). By default, all components are managed and auto-filled upon reconciliation for visibility: Example QuayRegistry resource apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: quay managed: true - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true - kind: clairpostgres managed: true 1.2. Using managed components Unless your QuayRegistry custom resource specifies otherwise, the Red Hat Quay Operator uses defaults for the following managed components: quay: Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, for example, environment variables and number of replicas. This component is new as of Red Hat Quay 3.7 and cannot be set to unmanaged. postgres: For storing the registry metadata. As of Red Hat Quay 3.9, it uses a version of PostgreSQL 13 from Software Collections. Note When upgrading from Red Hat Quay 3.8 to 3.9, the Operator automatically handles upgrading PostgreSQL 10 to PostgreSQL 13. This upgrade is required. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. clair: Provides image vulnerability scanning. redis: Stores live builder logs and the Red Hat Quay tutorial. Also includes the locking mechanism that is required for garbage collection. horizontalpodautoscaler: Adjusts the number of Quay pods depending on memory/CPU consumption. objectstorage: For storing image layer blobs, utilizes the ObjectBucketClaim Kubernetes API which is provided by NooBaa or Red Hat OpenShift Data Foundation. route: Provides an external entrypoint to the Red Hat Quay registry from outside of OpenShift Container Platform. mirror: Configures repository mirror workers to support optional repository mirroring. monitoring: Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting Quay pods. tls: Configures whether Red Hat Quay or OpenShift Container Platform handles SSL/TLS. clairpostgres: Configures a managed Clair database. 
This is a separate database from the PostgreSQL database used to deploy Red Hat Quay. The Red Hat Quay Operator handles any required configuration and installation work needed for Red Hat Quay to use the managed components. If the opinionated deployment performed by the Red Hat Quay Operator is unsuitable for your environment, you can provide the Red Hat Quay Operator with unmanaged resources, or overrides, as described in the following sections. 1.3. Using unmanaged components for dependencies If you have existing components such as PostgreSQL, Redis, or object storage that you want to use with Red Hat Quay, you first configure them within the Red Hat Quay configuration bundle, or the config.yaml file. Then, they must be referenced in your QuayRegistry bundle as a Kubernetes Secret while indicating which components are unmanaged. Note If you are using an unmanaged PostgreSQL database, and the version is PostgreSQL 10, it is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy . See the following sections for configuring unmanaged components: Using an existing PostgreSQL database Using unmanaged Horizontal Pod Autoscalers Using unmanaged storage Using an unmanaged NooBaa instance Using an unmanaged Redis database Disabling the route component Disabling the monitoring component Disabling the mirroring component 1.4. Config bundle secret The spec.configBundleSecret field is a reference to the metadata.name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair. The config.yaml file is a Red Hat Quay config.yaml file. This field is optional, and is auto-filled by the Red Hat Quay Operator if not provided. If provided, it serves as the base set of config fields which are later merged with other fields from any managed components to form a final output Secret, which is then mounted into the Red Hat Quay application pods. 1.5. Prerequisites for Red Hat Quay on OpenShift Container Platform Consider the following prerequisites prior to deploying Red Hat Quay on OpenShift Container Platform using the Red Hat Quay Operator. 1.5.1. OpenShift Container Platform cluster To deploy the Red Hat Quay Operator, you must have an OpenShift Container Platform 4.5 or later cluster and access to an administrative account. The administrative account must have the ability to create namespaces at the cluster scope. 1.5.2. Resource Requirements Each Red Hat Quay application pod has the following resource requirements: 8 Gi of memory 2000 millicores of CPU The Red Hat Quay Operator creates at least one application pod per Red Hat Quay deployment it manages. Ensure your OpenShift Container Platform cluster has sufficient compute resources for these requirements. 1.5.3. Object Storage By default, the Red Hat Quay Operator uses the ObjectBucketClaim Kubernetes API to provision object storage. Consuming this API decouples the Red Hat Quay Operator from any vendor-specific implementation. Red Hat OpenShift Data Foundation provides this API through its NooBaa component, which is used as an example throughout this documentation. 
Red Hat Quay can be manually configured to use multiple storage cloud providers, including the following: Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Red Hat Quay) Microsoft Azure Blob Storage Google Cloud Storage Ceph Object Gateway (RADOS) OpenStack Swift CloudFront + S3 For a complete list of object storage providers, see the Quay Enterprise 3.x support matrix. 1.5.4. StorageClass When you deploy the Quay and Clair PostgreSQL databases by using the Red Hat Quay Operator, the Operator relies on a default StorageClass being configured in your cluster. The default StorageClass used by the Red Hat Quay Operator provisions the Persistent Volume Claims required by the Quay and Clair databases. These PVCs are used to store data persistently, ensuring that your Red Hat Quay registry and Clair vulnerability scanner remain available and maintain their state across restarts or failures. Before proceeding with the installation, verify that a default StorageClass is configured in your cluster to ensure seamless provisioning of storage for the Quay and Clair components.
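As an illustration of the unmanaged-component pattern described in section 1.3, the following is a minimal sketch of a QuayRegistry resource that hands the database over to an existing PostgreSQL deployment; the names are placeholders, and it assumes that the referenced config bundle Secret contains a config.yaml with a DB_URI entry pointing at the external database:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret   # Secret whose config.yaml supplies DB_URI for the external database
  components:
    - kind: postgres
      managed: false                          # database lifecycle is handled outside the Red Hat Quay Operator

Components that you do not list keep their default managed behavior and are auto-filled by the Red Hat Quay Operator upon reconciliation.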
[ "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: quay managed: true - kind: postgres managed: true - kind: clair managed: true - kind: redis managed: true - kind: horizontalpodautoscaler managed: true - kind: objectstorage managed: true - kind: route managed: true - kind: mirror managed: true - kind: monitoring managed: true - kind: tls managed: true - kind: clairpostgres managed: true" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-concepts
Chapter 2. Security Tips for Installation
Chapter 2. Security Tips for Installation Security begins with the first time you put that CD or DVD into your disk drive to install Red Hat Enterprise Linux 7. Configuring your system securely from the beginning makes it easier to implement additional security settings later. 2.1. Securing BIOS Password protection for the BIOS (or BIOS equivalent) and the boot loader can prevent unauthorized users who have physical access to systems from booting using removable media or obtaining root privileges through single user mode. The security measures you should take to protect against such attacks depend both on the sensitivity of the information on the workstation and the location of the machine. For example, if a machine is used in a trade show and contains no sensitive information, then it may not be critical to prevent such attacks. However, if an employee's laptop with private, unencrypted SSH keys for the corporate network is left unattended at that same trade show, it could lead to a major security breach with ramifications for the entire company. If the workstation is located in a place where only authorized or trusted people have access, however, then securing the BIOS or the boot loader may not be necessary. 2.1.1. BIOS Passwords The two primary reasons for password protecting the BIOS of a computer are [1] : Preventing Changes to BIOS Settings - If an intruder has access to the BIOS, they can set it to boot from a CD-ROM or a flash drive. This makes it possible for them to enter rescue mode or single user mode, which in turn allows them to start arbitrary processes on the system or copy sensitive data. Preventing System Booting - Some BIOSes allow password protection of the boot process. When activated, an attacker is forced to enter a password before the BIOS launches the boot loader. Because the methods for setting a BIOS password vary between computer manufacturers, consult the computer's manual for specific instructions. If you forget the BIOS password, it can be reset either with jumpers on the motherboard or by disconnecting the CMOS battery. For this reason, it is good practice to lock the computer case if possible. However, consult the manual for the computer or motherboard before attempting to disconnect the CMOS battery. 2.1.1.1. Securing Non-BIOS-based Systems Other systems and architectures use different programs to perform low-level tasks roughly equivalent to those of the BIOS on x86 systems. For example, the Unified Extensible Firmware Interface (UEFI) shell. For instructions on password protecting BIOS-like programs, see the manufacturer's instructions. [1] Since system BIOSes differ between manufacturers, some may not support password protection of either type, while others may support one type but not the other.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-security_tips_for_installation
Chapter 1. Red Hat Decision Manager versioning
Chapter 1. Red Hat Decision Manager versioning Red Hat Process Automation Manager versions are designated with a numerical Major.Minor.Patch format, such as 7.13.5. In this example, the major release is 7.x.x , the minor release is 7.13.x , and the patch release is 7.13.5. Major releases often require data migration, while minor release upgrades and patch updates are typically managed with update tools provided with the Red Hat Decision Manager release artifacts. Note Starting with release 7.13, the distribution files for Red Hat Decision Manager are replaced with Red Hat Process Automation Manager files. The following are the general types of releases for Red Hat Decision Manager: Major release migrations Major releases of Red Hat Decision Manager include substantial enhancements, security updates, bug fixes, and possibly redesigned features and functions. Data migration is typically required when an application is moved from one major release to another major release, such as from Red Hat JBoss BRMS 6.4.x to Red Hat Decision Manager 7.0. Automated migration tools are often provided with new major versions of Red Hat Decision Manager to facilitate migration, but some manual effort is likely required for certain data and configurations. The supported migration paths are specified in product announcements and documentation. For example migration instructions, see Migrating from Red Hat JBoss BRMS 6.4 to Red Hat Decision Manager 7.0 . Minor release upgrades Minor releases of Red Hat Decision Manager include enhancements, security updates, and bug fixes. Data migration may be required when an application is moved from one minor release to another minor release, such as from Red Hat Decision Manager 7.5.x to 7.6. Automated update tools are often provided with both patch updates and new minor versions of Red Hat Decision Manager to facilitate updating certain components of Red Hat Decision Manager, such as Business Central, KIE Server, and the headless Process Automation Manager controller. Other Red Hat Decision Manager artifacts, such as the decision engine and standalone Business Central, are released as new artifacts with each minor release and you must reinstall them to apply the update. Before you upgrade to a new minor release, apply the latest patch update to your current version of Red Hat Decision Manager to ensure that the minor release upgrade is successful. Patch updates Patch updates of Red Hat Decision Manager include the latest security updates and bug fixes. Scheduled patch updates contain all previously released patch updates for that minor version of the product, so you do not need to apply each patch update incrementally in order to apply the latest update. For example, you can update Red Hat Decision Manager 7.5.0 or 7.5.1 to Red Hat Decision Manager 7.5.2. However, for optimal Red Hat Decision Manager performance, apply product updates as they become available. Occasionally, Red Hat might release unscheduled patch updates outside the normal update cycle of the existing product. These may include security or other updates provided by Red Hat Global Support Services (GSS) to fix specific issues, and may not be cumulative updates. Automated update tools are often provided with both patch updates and new minor versions of Red Hat Decision Manager to facilitate updating certain components of Red Hat Decision Manager, such as Business Central, KIE Server, and the headless Process Automation Manager controller. 
Other Red Hat Decision Manager artifacts, such as the decision engine and standalone Business Central, are released as new artifacts with each minor release and you must reinstall them to apply the update. To ensure optimal transition between releases and to keep your Red Hat Decision Manager distribution current with the latest enhancements and fixes, apply new product releases and updates to Red Hat Decision Manager as they become available in the Red Hat Customer Portal. Consider also enabling product notifications in the Red Hat Customer Portal.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/about-ba-con_planning
2.3. Attaching Subsystems to, and Detaching Them from, an Existing Hierarchy
2.3. Attaching Subsystems to, and Detaching Them from, an Existing Hierarchy To add a subsystem to an existing hierarchy, detach it from an existing hierarchy, or move it to a different hierarchy, edit the mount section of the /etc/cgconfig.conf file as root, using the same syntax described in Section 2.2, "Creating a Hierarchy and Attaching Subsystems" . When cgconfig starts, it will reorganize the subsystems according to the hierarchies that you specify. Alternative method To add an unattached subsystem to an existing hierarchy, remount the hierarchy. Include the extra subsystem in the mount command, together with the remount option. Example 2.4. Remounting a hierarchy to add a subsystem The lssubsys command shows cpu , cpuset , and memory subsystems attached to the cpu_and_mem hierarchy: Remount the cpu_and_mem hierarchy, using the remount option, and include cpuacct in the list of subsystems: The lssubsys command now shows cpuacct attached to the cpu_and_mem hierarchy: Analogously, you can detach a subsystem from an existing hierarchy by remounting the hierarchy and omitting the subsystem name from the -o options. For example, to then detach the cpuacct subsystem, simply remount and omit it:
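~]# mount -t cgroup -o remount,cpu,cpuset,memory cpu_and_mem /cgroup/cpu_and_mem

For the primary method, editing /etc/cgconfig.conf, a minimal sketch of the corresponding mount section might look like the following; the hierarchy path and the set of attached subsystems are examples only, so adjust them to match your configuration:

mount {
    # attach these subsystems to the single hierarchy mounted at /cgroup/cpu_and_mem
    cpu     = /cgroup/cpu_and_mem;
    cpuset  = /cgroup/cpu_and_mem;
    cpuacct = /cgroup/cpu_and_mem;
    memory  = /cgroup/cpu_and_mem;
}

After editing the file, restart the cgconfig service (for example, with service cgconfig restart ) so that the subsystems are reorganized according to the hierarchies you specified.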
[ "~]# lssubsys -am cpu,cpuset,memory /cgroup/cpu_and_mem net_cls ns cpuacct devices freezer blkio", "~]# mount -t cgroup -o remount,cpu,cpuset,cpuacct,memory cpu_and_mem /cgroup/cpu_and_mem", "~]# lssubsys -am cpu,cpuacct,cpuset,memory /cgroup/cpu_and_mem net_cls ns devices freezer blkio", "~]# mount -t cgroup -o remount,cpu,cpuset,memory cpu_and_mem /cgroup/cpu_and_mem" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-attaching_subsystems_to_and_detaching_them_from_an_existing_hierarchy
Chapter 1. Introduction to Service Telemetry Framework release
Chapter 1. Introduction to Service Telemetry Framework release This release of Service Telemetry Framework (STF) provides new features and resolved issues specific to STF. STF uses components from other Red Hat products. For specific information pertaining to the support of these components, see https://access.redhat.com/site/support/policy/updates/openstack/platform/ and https://access.redhat.com/support/policy/updates/openshift/ . STF 1.5 is compatible with OpenShift Container Platform versions 4.14 and 4.16 as the deployment platform. 1.1. Product support The Red Hat Customer Portal offers resources to guide you through the installation and configuration of Service Telemetry Framework. The following types of documentation are available through the Customer Portal: Product documentation Knowledge base articles and solutions Technical briefs Support case management You can access the Customer Portal at https://access.redhat.com/ .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/service_telemetry_framework_release_notes_1.5/assembly-introduction-to-service-telemetry-framework-release_osp
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The Red Hat build of Apache Qpid Proton DotNet examples require a running message broker with a queue named hello-world-example . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named hello-world-example . USD <broker-instance-dir> /bin/artemis queue create --name hello-world-example --address hello-world-example --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2023-12-07 15:25:52 UTC
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name hello-world-example --address hello-world-example --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/using_the_broker_with_the_examples
Chapter 3. Installing Red Hat Developer Hub in an air-gapped environment with the Helm Chart
Chapter 3. Installing Red Hat Developer Hub in an air-gapped environment with the Helm Chart An air-gapped environment, also known as an air-gapped network or isolated network, ensures security by physically segregating the system or network. This isolation is established to prevent unauthorized access, data transfer, or communication between the air-gapped system and external sources. You can install Red Hat Developer Hub in an air-gapped environment to ensure security and meet specific regulatory requirements. To install Developer Hub in an air-gapped environment, you must have access to registry.redhat.io and to the registry for the air-gapped environment. Prerequisites You have installed Red Hat OpenShift Container Platform 4.14 or later. You have access to registry.redhat.io. You have access to the Red Hat OpenShift Container Platform image registry of your cluster. For more information about exposing the image registry, see the Red Hat OpenShift Container Platform documentation about Exposing the registry . You have installed the OpenShift CLI ( oc ) on your workstation. You have installed the podman command line tool on your workstation. You have an account in the Red Hat Developer portal. Procedure Log in to your OpenShift Container Platform account using the OpenShift CLI ( oc ), by running the following command: oc login -u <user> -p <password> https://api.<hostname>:6443 Log in to the OpenShift Container Platform image registry using the podman command line tool, by running the following command: podman login -u kubeadmin -p USD(oc whoami -t) default-route-openshift-image-registry.<hostname> Note You can run the following commands to get the full host name of the OpenShift Container Platform image registry, and then use the host name in a command to log in: REGISTRY_HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') podman login -u kubeadmin -p USD(oc whoami -t) USDREGISTRY_HOST Log in to registry.redhat.io in podman by running the following command: podman login registry.redhat.io For more information about registry authentication, see Red Hat Container Registry Authentication . Pull the Developer Hub and PostgreSQL images from the Red Hat image registry to your workstation by running the following commands: podman pull registry.redhat.io/rhdh/rhdh-hub-rhel9:1.3 podman pull registry.redhat.io/rhel9/postgresql-15:latest Push both images to the internal OpenShift Container Platform image registry by running the following commands: podman push --remove-signatures registry.redhat.io/rhdh/rhdh-hub-rhel9:1.3 default-route-openshift-image-registry.<hostname>/<project_name>/rhdh-hub-rhel9:1.3 podman push --remove-signatures registry.redhat.io/rhel9/postgresql-15:latest default-route-openshift-image-registry.<hostname>/<project_name>/postgresql-15:latest For more information about pushing images directly to the OpenShift Container Platform image registry, see How do I push an Image directly into the OpenShift 4 registry . Important If an x509 error occurs, verify that you have installed the CA certificate used for OpenShift Container Platform routes on your system . 
Use the following command to verify that both images are present in the internal OpenShift Container Platform registry: oc get imagestream -n <project_name> Enable local image lookup for both images by running the following commands: oc set image-lookup postgresql-15 oc set image-lookup rhdh-hub-rhel9 Go to YAML view and update the image section for backstage and postgresql using the following values: Example values for Developer Hub image upstream: backstage: image: registry: "" repository: rhdh-hub-rhel9 tag: latest Example values for PostgreSQL image upstream: postgresql: image: registry: "" repository: postgresql-15 tag: latest Install Red Hat Developer Hub by using the Helm chart.
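As a hedged illustration of that final step, the following commands show one way to install the chart from the command line; they assume that the Red Hat Developer Hub chart is published as redhat-developer-hub in the OpenShift Helm chart repository (https://charts.openshift.io/), and that the release name, namespace, and values file are placeholders for your environment:

helm repo add openshift-helm-charts https://charts.openshift.io/
# values.yaml contains the backstage and postgresql image overrides shown above
helm upgrade --install redhat-developer-hub openshift-helm-charts/redhat-developer-hub -n <project_name> -f values.yaml

Depending on your cluster, the same chart may also be available from the Helm catalog in the OpenShift Container Platform web console.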
[ "login -u <user> -p <password> https://api.<hostname>:6443", "login -u kubeadmin -p USD(oc whoami -t) default-route-openshift-image-registry.<hostname>", "REGISTRY_HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "login -u kubeadmin -p USD(oc whoami -t) USDREGISTRY_HOST", "login registry.redhat.io", "pull registry.redhat.io/rhdh/rhdh-hub-rhel9:1.3", "pull registry.redhat.io/rhel9/postgresql-15:latest", "push --remove-signatures registry.redhat.io/rhdh/rhdh-hub-rhel9:1.3 default-route-openshift-image-registry.<hostname>/<project_name>/rhdh-hub-rhel9:1.3", "push --remove-signatures registry.redhat.io/rhel9/postgresql-15:latest default-route-openshift-image-registry.<hostname>/<project_name>/postgresql-15:latest", "get imagestream -n <project_name>", "set image-lookup postgresql-15", "set image-lookup rhdh-hub-rhel9", "upstream: backstage: image: registry: \"\" repository: rhdh-hub-rhel9 tag: latest", "upstream: postgresql: image: registry: \"\" repository: postgresql-15 tag: latest" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_in_an_air-gapped_environment/proc-install-rhdh-airgapped-environment-ocp-helm_title-install-rhdh-air-grapped
Appendix A. Tests
Appendix A. Tests In this section we give more detailed information about each of the tests for hardware certification. Each test section uses the following format: What the test covers This section lists the types of hardware that this particular test is run on. RHEL version supported This section lists the versions of RHEL that the test is supported on. What the test does This section explains what the test scripts do. Remember, all the tests are python scripts and can be viewed in the directory /usr/lib/python2.7/site-packages/rhcert/suites/hwcert/tests if you want to know exactly what commands we are executing in the tests. Preparing for the test This section talks about the steps necessary to prepare for the test. For example, it talks about having a USB device on hand for the USB test and blank discs on hand for rewritable optical drive tests. Executing the test This section identifies whether the test is interactive or non-interactive and explains what command is necessary to run the test. You can choose either way to run the test: Follow Running the certification tests using CLI to run the test. Select the appropriate test name from the displayed list using the command: In case of hardware detection issues or other hardware-related problems during planning, follow Manually adding and running the tests . Run the rhcert-cli command by specifying the desired test name. Run Time This section explains how long a run of this test will take. Timing information for the supportable test is mentioned in each section as it is a required test for every run of the test suite. A.1. ACPI keys What the test covers The ACPI keys test captures a variety of input events from the system integrated keyboard. RHEL version supported RHEL 8.6 and later RHEL 9 What the test does The test captures the following: ACPI-related signals such as power, suspend, and sleep. Key presses that send signals associated with global keyboard shortcuts such as <Meta+E> , which opens the file browser. Executing the test The test is interactive. Run the following command and then select the appropriate ACPI keys test name from the list that displays. This test requires capturing all input events. During the test, press all the non-standard and multimedia keys on the device. Press the Escape key at any time to end the test and to see a list of keys. The test is successful if all the keys that you tested appear in the list. Run time The test takes less than 5 minutes to finish. Any other mandatory or selected tests will add to the overall run time. A.2. Audio What the test covers Removable sound cards and integrated sound devices are tested with the audio test. The test is scheduled when the hardware detection routines find the following strings in the udev database: You can see these strings and the strings that trigger the scheduling of the other tests in this guide in the output of the command udevadm info --export-db . What the test does The test plays a prerecorded sound (guitar chords or a recorded voice) while simultaneously recording it to a file, then it plays back the recording and asks if you could hear the sound. Preparing for the test Before you begin your test run, you should ensure that the audio test is scheduled and that the system can play and record sound. Contact your support contact at Red Hat for further assistance if the test does not appear on a system with installed audio devices. 
If the test is correctly scheduled, continue on to learn how to manually test the playback and record functions of your sound device. With built-in speakers present or speakers/headphones plugged into the headphone/line-out jack, playback can be confirmed before testing in these ways: In the Settings application click the Sound option. Click on the Output tab, select the sound card you want to test, and adjust the Output volume to an appropriate level. Click Test Speakers . In the Speaker Testing pop-up window, click the Test buttons to generate sounds. If no sound can be heard, ensure that the speakers are plugged in to the correct port. You can use any line-out or headphone jack (we have no requirement for which port you must use). Verify the sound is not muted and try adjusting the volume on the speakers and in the operating system itself. If the audio device has record capabilities, these should also be tested before attempting to run the test. Plug a microphone into one of the Line-in or Mic jacks on the system, or you can use the built-in microphone if you are testing a notebook. Again, we don't require you to use a specific input jack; provided that one works, the test will pass. In the Settings application click the Sound option. Click on the Input tab, select the appropriate input device, and adjust the Input volume to 100%. Speak into, tap, or otherwise activate the input device, and watch the Input level graphic. If you see it moving, the input device is set up properly. If it does not move, try another input selection or microphone port to plug the input device into. Contact your support person if you are unable to either hear sound or see the input level display move, as this will lead to a failure of the audio test. If you are able to successfully play sounds and see movement on the input level display when making sounds near the microphone, continue to the section to learn how to run the test. Executing the test The audio test is interactive. Before you execute a test run that includes an audio test, connect the microphone you used for your manual test and place it in front of the speakers, or ensure that the built-in microphone is free of obstructions. Alternatively, you can connect the line-out jack directly to the mic/line-in jack with a patch cable if you are testing in a noisy environment. Run the following command and then select the appropriate Audio test name from the list that displays. The interactive steps are as follows: The system will play sounds and ask if you heard them. Answer y or n as appropriate. If you decide to use a direct connection between output and input rather than speakers and a microphone, you will need to choose y for the answer regardless, as your speakers will be bypassed by the patch cable. The system will play back the file it recorded. If you heard the sound, answer y when prompted. Otherwise, answer n . Run time The audio test takes less than 1 minute for simultaneous playback and record, then the playback of the recorded sound. The required supportable test will add about a minute to the overall run time. A.3. backlight What the test covers The backlight test runs when it detects an attached display on the system and support of software backlight control is available. RHEL version supported RHEL 8 RHEL 9 What the test does The test ensures that the backlight control is functioning as expected by adjusting the display brightness of the attached display to the minimum and then maximum values. 
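Software backlight control is typically exposed through the kernel sysfs interface, so a quick manual check is possible before running the test. The following commands are only an illustration and assume that at least one backlight device is registered; if /sys/class/backlight/ is empty, software backlight control is not available and the test is unlikely to be planned:

ls /sys/class/backlight/
# compare the current value with the maximum supported value
cat /sys/class/backlight/*/max_brightness
cat /sys/class/backlight/*/brightness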
Preparing for the test Ensure that the host under test is running RHEL 8.0 or later. Ensure that the system has backlight support and the display is connected to the system. Executing the test The test is interactive. Run the following command and then select the appropriate backlight test name from the list that displays. The display brightness will change from the maximum to minimum and back. The test will pass after you confirm that the display brightness is changing as expected. Run time This test takes less than a minute to run. Any other mandatory or selected tests will add to the overall run time. A.4. Battery What the test covers The battery test is valid and can only be run on systems with built-in batteries. The test is not supported on external batteries that do not provide primary, internal power to the system, such as UPS or BIOS batteries. The test is scheduled when the hardware detection routines find the following string in the udev database: What the test does The test detects if the battery is connected, the AC adapter is plugged into the system, and the charging and discharging status of the battery. Preparing for the test Note Do not perform the test when the battery is at 100% charge. Discharge the battery to a lower level before executing the test to avoid potential test failures. Executing the test The test is interactive. Run the following command and then select the appropriate battery test name from the list that displays. The test detects the 10 mWh battery charge and discharge, displays the current capacity, and charging status of the battery. Follow the on-screen instructions to unplug and plug in the AC adapter when prompted. Run time The execution time of the test depends on the charging and discharging speed of the battery. Since this test is run on a laptop, the required supportable test will run along with its suspend test. Overall, it takes 7-10 minutes. A.5. bluetooth What the test covers The bluetooth test is supported on systems with a bluetooth v3, v4, or v5 controller. RHEL version supported RHEL 8 RHEL 9 What the test does The test uses rfkill command to check the availability of bluetooth controller in the system. It then runs hciconfig or btmgmt command to get the bluetooth controller version. Afterward, the test verifies if the controller can scan, discover, pair with, select and trust another selected and aligned bluetooth v3, v4, or v5 device by using the bluetoothctl command tool. If the HUT has multiple bluetooth controllers, the bluetooth test is planned automatically for each bluetooth controller. Preparing for the test Before you begin the test, ensure to: Have a device that supports the same or later versions of Bluetooth as the controller. Enable Bluetooth on both the HUT and the pairing device. Pair the devices manually by using the settings application to confirm connectivity. Unpair the devices. Executing the test The test is interactive. Run the following command and then select the appropriate bluetooth test name from the list that displays. Then, select the device for which you want to test the bluetooth functionality. Run time The test takes around 5 minutes to complete. However, the time can vary depending on the bluetooth network connectivity. A.6. 
Bluray What the test covers The Bluray test runs on the following media and related drive types: Read-only media and drives (BD-ROM) Write-once media and drives (BD-R) Rewritable media and drives (BD-RE) Based on information from the udev command, the test suite determines which optical drive tests (Blu-ray, DVD, or CD-ROM) to schedule and the type of media to test (read-only, writable, and rewritable). For example, the test suite will plan the following tests for a Blu-ray drive with rewriting capabilities that can also read DVD and CD-ROM discs: A rewrite (erase, write, and read) test for Blu-ray media A read test for DVD media A read test for CD-ROM media You only need to run the Bluray test once for a given drive. What the test does The test performs the following tasks depending on the capabilities of the drive: Read-only drives - First, it reads data from a disc and copies it to the hard disk. Then, it compares the data on the disc to the copy on the hard disk. If all file checksums match, the test passes. Drives with writing capabilities - First, it reads data from the hard disk and writes it to a writable blank disc. Then, it compares the data on the hard disk to the copy on the disc. If all file checksums match, the test passes. Drives with rewriting capabilities - First, it erases all information from the rewritable disc. Then, it reads data from the hard disk and writes it to the rewritable disc. Finally, it compares the data on the hard disk to the copy on the disc. If the erasing operation is successful and all file checksums match, the test passes. Executing the test The test is interactive. Run the following command and then select the appropriate Bluray test name from the list that displays. Follow the instructions on screen to insert the appropriate media and to close the drive's tray if appropriate. Run time The run time for the Bluray test depends on the speed of the media and the drive. For a 2x 25G BD-RE disc, the test finishes in approximately 14 minutes. A.7. CD ROM What the test covers The CD ROM test runs on the following media and related drive types: Read-only media and drives (CD-ROM) Write-once media and drives (CD-R) Rewritable media and drives (CD-RW) Based on information from the udev command, the test suite determines which optical drive tests (Blu-ray, DVD, or CD-ROM) to schedule and the type of media to test (read-only, writable, and rewritable). For example, the test suite will plan the following tests for a Blu-ray drive with rewriting capabilities that can also read DVD and CD-ROM discs: A rewrite (erase, write, and read) test for Blu-ray media A read test for DVD media A read test for CD-ROM media You only need to run the CD ROM test once for a given drive. What the test does The test performs the following tasks depending on the capabilities of the drive: Read-only drives - First, it reads data from a disc and copies it to the hard disk. Then, it compares the data on the disc to the copy on the hard disk. If all file checksums match, the test passes. Drives with writing capabilities - First, it reads data from the hard disk and writes it to a writable blank disc. Then, it compares the data on the hard disk to the copy on the disc. If all file checksums match, the test passes. Drives with rewriting capabilities - First, it erases all information from the rewritable disc. Then, it reads data from the hard disk and writes it to the rewritable disc. Finally, it compares the data on the hard disk to the copy on the disc. 
If the erasing operation is successful and all file checksums match, the test passes. Executing the test The test is interactive. Run the following command and then select the appropriate CD ROM test name from the list that displays. Follow the instructions on screen to insert the appropriate media and to close the drive's tray if appropriate. Run time The run time for the CD ROM test is dependent on the speed of the media and drive. For a 12x 714MB CD-RW disc, the test finishes in approximately 7 minutes. A.8. Core What the test covers The core test examines the system's CPUs and ensures that they are capable of functioning properly under load. What the test does The core test is actually composed of two separate routines. The first test is designed to detect clock jitter. Jitter is a condition that occurs when the system clocks are out of sync with each other. The system clocks are not the same as the CPU clock speed, which is just another way to refer to the speed at which the CPUs are operating. The jitter test uses the getimeofday() function to obtain the time as observed by each logical CPU and then analyzes the returned values. If all the CPU clocks are within .2 nanoseconds of each other, the test passes. The tolerances for the jitter test are very tight. In order to get good results it's important that the rhcert tests are the only loads running on a system at the time the test is executed. Any other compute loads that are present could interfere with the timing and cause the test to fail. The jitter test also checks to see which clock source the kernel is using. It will print a warning in the logs if an Intel processor is not using TSC, but this will not affect the PASS/FAIL status of the test. The second routine run in the core test is a CPU load test. It's the test provided by the required stress package. The stress program, which is available for use outside the rhcert suite if you are looking for a way to stress test a system, launches several simultaneous activities on the system and then monitors for any failures. Specifically it instructs each logical CPU to calculate square roots, it puts the system under memory pressure by using malloc() and free() routines to reserve and free memory respectively, and it forces writes to disk by calling sync() . These activities continue for 10 minutes, and if no failures occur within that time period, the test passes. Please see the stress manpage if you are interested in using it outside of hardware certification testing. Preparing for the test The only preparation for the core test is to install a CPU that meets the requirements that are stated in the Policy Guide. Executing the test The core test is non-interactive. Run the following command and then select the appropriate Core test name from the list that displays. Run time, bare-metal The core test itself takes about 12 minutes to run on a bare-metal system. The jitter portion of the test takes a minute or two and the stress portion runs for exactly 10 minutes. The required supportable test will add about a minute to the overall run time. Run time, full-virt guest The fv_core test takes slightly longer than the bare-metal version, about 14 minutes, to run in a KVM guest. The added time is due to guest startup/shutdown activities and the required supportable test that runs in the guest. The required supportable test on the bare-metal system will add about a minute to the overall run time. A.9. 
CPU scaling What the test covers The cpuscaling test examines a CPU's ability to increase and decrease its clock speed according to the compute demands placed on it. What the test does The test exercises the CPUs at varying frequencies using different scaling governors (the set of instructions that tell the CPU when to change to higher or lower clock speeds and how fast to do so) and measures the difference in the time that it takes to complete a standardized workload. The test is scheduled when the hardware detection routines find the following directories in /sys containing more than one cpu frequency: The cpuscaling test is planned once per package, rather than being listed once per logical CPU. When the test is run, it will determine topology via /sys/devices/system/cpu/cpu X /topology/physical_package_id , and run the test in parallel for all the logical CPUs in a particular package. The test runs the turbostat command first to gather the processor statistics. On supported architectures, turbostat checks if the advanced statistics columns are visible in the turbostat output file, but returns a warning if the file does not contain the columns. The test then attempts to execute the cstate subtest and if it fails, executes the pstate subtest. The test procedure for each CPU package is as follows: The test uses the values found in the sysfs filesystem to determine the maximum and minimum CPU frequencies. You can see these values for any system with this command: There will always be at least two frequencies displayed here, a maximum and a minimum, but some processors are capable of finer CPU speed control and will show more than two values in the file. Any additional CPU speeds between the max and min are not specifically used during the test, though they may be used as the CPU transitions between max and min frequencies. The test procedure is as follows: The test records the maximum and minimum processor speeds from the file /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies . The userspace governor is selected and maximum frequency is chosen. Maximum speed is confirmed by reading all processors' /sys/devices/system/cpu/cpu X /cpufreq/scaling_cur_freq value. If this value does not match the selected frequency, the test will report a failure. Every processor in the package is given the simultaneous task of calculating pi to 2x10^12 digits. The value for the pi calculation was chosen because it takes a meaningful amount of time to complete (about 30 seconds). The amount of time it took to calculate pi is recorded for each CPU, and an average is calculated for the package. The userspace governor is selected and the minimum speed is set. Minimum speed is confirmed by sysfs data, with a failure occurring if any CPU is not at the requested speed. The same pi calculation is performed by every processor in the package and the results recorded. The ondemand governor is chosen, which throttles the CPU between minimum and maximum speeds depending on workload. Minimum speed is confirmed by sysfs data, with a failure occurring if any CPU is not at the requested speed. The same pi calculation is performed by every processor in the package and the results recorded. The performance governor is chosen, which forces the CPU to maximum speed at all times. Maximum speed is confirmed by sysfs data, with a failure occurring if any CPU is not at the requested speed. The same pi calculation is performed by every processor and the results recorded. 
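For reference, the sysfs values that the test reads can also be inspected manually. The following commands are illustrative only; the output varies by CPU and frequency driver, and scaling_available_frequencies is not exposed by every driver (for example, it is absent with intel_pstate):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# current operating frequency that the test compares against the requested frequency
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq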
Now the analysis is performed on the three subsections. In steps one through eight we obtain the pi calculation times at maximum and minimum CPU speeds. The difference in the time it takes to calculate pi at the two speeds should be proportional to the difference in CPU speed. For example, if a hypothetical test system had a max frequency of 2GHz and a min of 1GHz and it took the system 30 seconds to run the pi calculation at max speed, we would expect the system to take 60 seconds at min speed to calculate pi. We know that for various reasons perfect results will not be obtained, so we allow for a 10% margin of error (faster or slower than expected) on the results. In our hypothetical example, this means that the minimum speed run could take between 54 and 66 seconds and still be considered a passing test (90% of 60 = 54 and 110% of 60 = 66). In steps nine through eleven, we test the pi calculation time using the ondemand governor. This confirms that the system can quickly increase the CPU speed to the maximum when work is being done. We take the calculation time obtained in step eleven and compare it to the maximum speed calculation time we obtained back in step five. A passing test has those two values differing by no more than 10%. In steps twelve through fourteen, we test the pi calculation using the performance governor. This confirms that the system can hold the CPU at maximum frequency at all times. We take the pi calculation time obtained in step 14 and compare it to the maximum speed calculation time we obtained back in step five. Again, a passing test has those two values differing by no more than 10%. An additional portion of the cpuscaling test runs when an Intel processor with the TurboBoost feature is detected by the presence of the ida CPU flag in /proc/cpuinfo . This test chooses one of the CPUs in each package, omitting CPU0 for housekeeping purposes, and measures the performance using the ondemand governor at maximum speed. It expects a result of at least 5% faster performance than the test, when all the cores in the package were being tested in parallel. Preparing for the test To prepare for the test, ensure that CPU frequency scaling is enabled in the BIOS and ensure that a CPU is installed that meets the requirements explained in the Policy Guide. Executing the test The cpuscaling test is non-interactive. Run the following command and then select the appropriate CPU scaling test name from the list that displays. Run time The cpuscaling test takes about 42 minutes for a 2013-era, single CPU, 6-core/12-thread 3.3GHz Intel-based workstation running Red Hat Enterprise Linux 6.4, AMD64 and Intel 64. Systems with higher core counts and more populated sockets will take longer. The required supportable test will add about a minute to the overall run time. A.10. DVD What the test covers The DVD test runs on the following media and related drive types: Read-only media and drives (DVD-ROM) Write-once media and drives (DVD+R and DVD-R) Rewritable media and drives (DVD+RW and DVD-RW) Based on information from the udev command, the test suite determines which optical drive tests (Blu-ray, DVD, or CD-ROM) to schedule and the type of media to test (read-only, writable, and rewritable). 
For example, the test suite will plan the following tests for a Blu-ray drive with rewriting capabilities that can also read DVD and CD-ROM discs: A rewrite (erase, write, and read) test for Blu-ray media A read test for DVD media A read test for CD-ROM media If your drives support both the DVD-RW and DVD+RW formats, you can use either type of disc during the test. You do not need to test both formats. Moreover, you only need to run the DVD test once for a given drive. What the test does The test performs the following tasks depending on the capabilities of the drive: Read-only drives - First, it reads data from a disc and copies it to the hard disk. Then, it compares the data on the disc to the copy on the hard disk. If all file checksums match, the test passes. Drives with writing capabilities - First, it reads data from the hard disk and writes it to a writable blank disc. Then, it compares the data on the hard disk to the copy on the disc. If all file checksums match, the test passes. Drives with rewriting capabilities - First, it erases all information from the rewritable disc. Then, it reads data from the hard disk and writes it to the rewritable disc. Finally, it compares the data on the hard disk to the copy on the disc. If the erasing operation is successful and all file checksums match, the test passes. Executing the test Run the following command and then select the appropriate DVD test name from the list that displays. Follow the instructions on screen to insert the appropriate media and to close the drive's tray if appropriate. Run time The run time for the DVD test is dependent on the speed of the media and drive. For a 4x 4.7GB DVD-RW disc, the test finishes in approximately 13 minutes. A.11. Ethernet What the test covers The Ethernet test only appears when the speed of a network device is not recognized by the test suite. This may be due to an unplugged cable or some other fault is preventing the proper detection of the connection speed. Please exit the test suite, check your connection, and run the test suite again when the device is properly connected. If the problem persists, contact your Red Hat support representative for assistance. The example below shows a system with two gigabit Ethernet devices, eth0 and eth1. Device eth0 is properly connected, but eth1 is not plugged in. The output of the ethtool command shows the expected gigabit Ethernet speed of 1000Mb/s for eth0: But on eth1 the ethtool command shows an unknown speed, which would cause the Ethernet test to be planned. A.12. Expresscard What the test covers The expresscard test looks for devices with both types of ExpressCard interfaces, USB and PCI Express (PCIe), and confirms that the system can communicate through both. ExpressCard slot detection is not as straightforward as detecting other devices in the system. ExpressCard was specifically designed to not require any kind of dedicated bridge device. It's merely a novel form factor interface that combines PCIe and USB. Because of this, there is no specific "ExpressCard slot" entry that we can see in the output of udev. We decided to schedule the test on systems that contain a battery, USB and PCIe interfaces, as we have seen no devices other than ExpressCard-containing laptops with this combination of hardware. What the test does The test first takes a snapshot of all the devices on the USB and PCIe buses using the lsusb and lspci commands. It then asks the tester how many ExpressCard slots are present in the system. 
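The snapshot-and-compare approach described above can be reproduced manually. The following is a minimal sketch (the temporary file names are arbitrary), not part of the certification tooling itself:
# Take a baseline of both buses before inserting the card
lsusb > /tmp/usb.before
lspci > /tmp/pci.before
# Insert the ExpressCard, wait a few seconds for enumeration, then:
lsusb > /tmp/usb.after
lspci > /tmp/pci.after
diff /tmp/usb.before /tmp/usb.after   # new lines indicate a USB-interface card
diff /tmp/pci.before /tmp/pci.after   # new lines indicate a PCIe-interface card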
The tester is asked to insert a card in one of the slots. The system scans the USB and PCIe buses and compares the results to the original lsusb and lspci output to detect any new devices. If a USB device is detected, the system asks you to remove the card and insert a card with a PCIe interface into the same slot. If a PCIe-based card is detected, the system asks you to remove it and insert a USB-based card into the same slot. If a card is inserted with both interfaces (a docking station card, for example), it fulfills both testing requirements for the slot at once. This procedure is repeated for all slots in the system. Preparing for the test You will need ExpressCard cards with USB and PCIe buses. This can be two separate cards or one card with both interfaces. Remove all ExpressCard cards before running the test. Executing the test The expresscard test is interactive. Run the following command and then select the appropriate Expresscard test name from the list that displays. It will prompt you to remove all ExpressCards, then ask for permission to load the PCI Express hotplug module (pciehp) if it is not loaded. PCIe hotplug capabilities are needed in order to add or remove PCIe-based ExpressCard cards while the system is running. the test will ask you for the number of ExpressCard slots in the system, followed by prompts to insert and remove cards with both types of interfaces (USB and PCIe) in any order. A.13. fingerprintreader What the test covers The fingerprintreader test is planned if the system has a built-in or plugin fingerprint reader. What the test does This test verifies that a fingerprint reader can scan, enroll, and verify the enrolled fingerprints in the fingerprint manager. Preparing for the test Ensure that the fingerprint reader is connected to the system. Executing the test This test is interactive. Run the following command and then select the appropriate fingerprintreader test name from the list that displays. The test will start detecting the fingerprint reader and then prompt you to place and scan your right index finger several times on the fingerprint reader until the enrollment completes. For verification, you will be prompted to scan the finger again for matching it with the enrolled fingerprint. Run time The test takes around a couple of minutes to complete until the reader finishes scanning and shows the enroll-complete state. A.14. firmware What the test covers The firmware test is supported to run on RHEL versions 8 and later for the x86_64 architecture systems using Unified Extensible Firmware Interface (UEFI) and the EFI System Resource Table (ESRT) for firmware management only. RHEL version supported RHEL 8 RHEL 9 What the test does The test runs the following subtests: Security Check subtest: The subtest checks if the host under test follows the security best practices by validating if the system and device firmware meet HSI-1 level standards . The test uses the fwupdagent security --force command to check the HSI-1 security attributes and capture the output. Update Service subtest: The subtest verifies if the host under test can download and install the firmware updates through Linux Vendor Firmware Service (LVFS). Success criteria The test passes only if all the HSI-1 attributes pass. The test passes if the system installs the LVFS update. Preparing for the test Ensure that the host under test is running RHEL 8.0 or later. Ensure that the system is booted in UEFI mode and not legacy BIOS mode. Executing the test This test is noninteractive. 
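Before running it, you can preview the HSI attributes that the Security Check subtest evaluates. A minimal sketch, assuming fwupd is installed and the platform is supported:
# Human-readable HSI summary (available in fwupd 1.5 and later)
fwupdmgr security
# Machine-readable form of the same attributes; the subtest captures output from this command
fwupdagent security --force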
Run the following command and then select the appropriate firmware test name from the list that displays.
Run time
This test takes a minute to run. Any other mandatory or selected tests will add to the overall run time.
A.15. fv_core
The fv_core test is a wrapper that launches the FV guest and runs a core test on it. Starting with RHEL 9.4, this test is supported to run on ARM systems.
RHEL version supported
The first time you run any full-virtualization test, the test tool will need to obtain the FV guest files. The execution time of the test tool depends on the transfer speed of the FV guest files. For example, if the FV guest files are located on the test server and you are using 1GbE or faster networking, it takes about a minute or two to transfer approximately 300MB of guest files. If the files are retrieved from the CWE API, which occurs automatically when the guest files are not installed or found on the test server, the first run time will depend on the transfer speed from the CWE API. When the guest files are available on the Host Under Test (HUT), they will be utilized for all later runs of the fv_* tests.
Additional resources
For more information about the test methodology and run times, see core. For more information about guest images, see Downloading guest images during test execution.
A.16. fv_cpu_pinning
CPU pinning is a method for dedicating system resources to a particular process. For example, an application may be locked to a particular logical core to reduce task switching. The virtualized (fv) CPU pinning method is similar except that pinning is done from a virtual CPU (vCPU) inside a KVM-based virtual machine to a physical core on the host machine. Starting with RHEL 9.4, this test is supported to run on ARM systems.
RHEL version supported
RHEL 8
RHEL 9
What the test covers
The fv_cpu_pinning test validates that a vCPU of the guest virtual machine (VM) can be configured and pinned to a dedicated CPU of the host machine. This test is run on a host machine and is supported on RHEL 8 for feature qualification on RHEL 8 based RHV 4 releases.
What the test does
The fv_cpu_pinning test runs three subtests: Setup guest VM VCPU, Perform FV CPU Pinning, and Verify FV CPU Pinning. The Setup guest VM VCPU subtest counts the number of logical cores of the host machine and isolates the last numbered core among those to dedicate it to the VM. The Perform FV CPU Pinning subtest then pins the vCPU of the VM to that CPU in the host machine. The test verifies the pinning by using the virsh vcpupin and virsh vcpuinfo commands and by checking the /proc directory information. Finally, the Verify FV CPU Pinning subtest uses a load test to verify that the guest VM vCPU workload is handled by the pinned CPU only.
Preparing for the test
There are no special requirements to run this test.
Executing the test
The fv_cpu_pinning test is non-interactive. Run the following command and then select the appropriate fv_cpu_pinning test name from the list that displays.
Run time
The fv_cpu_pinning test takes around 5 minutes to complete. Any other mandatory or selected tests will add to the overall run time.
Additional resources
For more information about guest images, see Downloading guest images during test execution.
A.17. fv_live_migration
What the test covers
The fv_live_migration test checks the ability of a Host Under Test (HUT) to migrate a running virtual machine to a test server.
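The underlying operation resembles a libvirt live migration. The following is a hedged sketch with a hypothetical guest name and test server hostname; the certification suite drives all of this automatically:
# On the HUT: push the running guest to the test server over SSH
virsh migrate --live --verbose hwcert-guest qemu+ssh://testserver/system
# Verify where the guest is running afterwards
virsh list --all                  # on the HUT: the guest should no longer be running here
ssh testserver virsh list --all   # on the test server: the guest should now be running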
RHEL version supported
RHEL 8
RHEL 9
What the test does
The test performs multiple subtests to complete the migration of a running virtual machine from the HUT to the test server. Successful completion of the test requires all of the subtests to pass.
The test checks if the HUT meets the requirements for migration, configures a virtual machine, and starts it on the HUT. It then migrates the running virtual machine from the HUT to the test server. After migration, the test verifies that the virtual machine is no longer running on the HUT and is running on the test server. Finally, the test migrates the running virtual machine from the test server back to the HUT and checks again that the virtual machine is running on the HUT and is no longer running on the test server.
Preparing for the test
Ensure that the test server and HUT are running Red Hat Enterprise Linux 8, and that the redhat-certification-hardware package is installed on both the test server and the HUT. Add the hostnames to the /etc/hosts file on both the test server and the HUT, and create the hostname alias for the fully qualified name as shown below:
<IP address of HUT> <hostname of HUT>
<IP address of test server> <hostname of test server>
Executing the test
The test is non-interactive. Currently, this test can be planned and executed manually via CLI only.
On RHEL 8:
On RHEL 9:
Run time
The test takes around 5 minutes to complete. The time might vary depending on whether the test server and HUT are in the same or different labs or networks.
Additional resources
For more information about guest images, see Downloading guest images during test execution.
A.18. fv_memory
The fv_memory test is a wrapper that launches the FV guest and runs a memory test on it. Starting with RHEL 9.4, this test is supported to run on ARM systems.
RHEL version supported
The first time you run any full-virtualization test, the test tool will need to obtain the FV guest files. The execution time of the test tool depends on the transfer speed of the FV guest files. For example, if the FV guest files are located on the test server and you are using 1GbE or faster networking, it takes about a minute or two to transfer approximately 300MB of guest files. If the files are retrieved from the CWE API, which occurs automatically when the guest files are not installed or found on the test server, the first run time will depend on the transfer speed from the CWE API. When the guest files are available on the Host Under Test (HUT), they will be utilized for all later runs of the fv_* tests.
Additional resources
For more information about the test methodology and run times, see memory. For more information about guest images, see Downloading guest images during test execution.
A.19. fv_pcie_storage_passthrough
What the test covers
The fv_pcie_storage_passthrough test is used to verify that control over a PCIe-based storage device, such as SAS and SATA, in the host machine can be transferred to a virtual machine. The test is supported on Red Hat Enterprise Linux 8 and must be run on a host machine. This test is planned automatically if the host supports device passthrough and has IOMMU enabled. Starting with RHEL 9.4, this test is supported to run on ARM systems.
RHEL version supported
RHEL 8
RHEL 9
What the test does
The test performs multiple subtests to attach a host machine's HBA device to a virtual machine and then run the storage tests inside the virtual machine. Successful completion of the test requires all of the subtests to pass.
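Device passthrough requires IOMMU support to be active on the host. The following is a quick, hedged sketch of how that can be confirmed on an x86 host; kernel parameters differ between Intel and AMD platforms, and the suite performs its own checks.
# Check whether the kernel was booted with IOMMU support enabled
grep -oE 'intel_iommu=on|amd_iommu=on|iommu=pt' /proc/cmdline
# An active IOMMU also shows up under sysfs and in the boot log
ls /sys/class/iommu/
dmesg | grep -iE 'DMAR|AMD-Vi|IOMMU' | head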
The test validates if the PCIe device connected to the host machine can be assigned to appear natively in the guest virtual machine, configures the guest virtual machine to use the passthrough PCIe device, and launches the virtual machine and ensures the device is functioning as expected inside it. Preparing for the test Ensure that the host machine supports device passthrough and has IOMMU enabled. To configure, see Configuring a Host for PCI Passthrough . Note Do not run the test on the storage devices with the root partition of the host machine. Executing the test The test is non-interactive. Run the following command and then select the appropriate fv_pcie_storage_passthrough test name from the list that displays. Run time The test takes around 30 minutes to run. Additional resources For more information about guest images, see Downloading guest images during test execution . A.20. fv_usb_network_passthrough What the test covers The fv_usb_network_passthrough test is used to verify that control over a USB-attached network device in the host machine can be transferred to a virtual machine. The test is supported on Red Hat Enterprise Linux version 8 and above and must be run on a host machine. This test is planned automatically if the host machine supports device passthrough and has IOMMU enabled. Starting with RHEL 9.4, this test is supported to run on ARM systems. RHEL version supported RHEL 8 RHEL 9 What the test does The test performs multiple subtests to attach a host machine's USB device to a virtual machine and then run the network tests inside the virtual machine. Successful completion of the test requires all of the subtests to pass. The test validates if the USB device connected to the host machine can be assigned to appear natively in the guest virtual machine, configures the guest virtual machine to use the passthrough USB device, and launches the virtual machine and ensures the device is functioning as expected inside it. Preparing for the test Ensure that the USB device is plugged into the HUT that supports device passthrough and has IOMMU enabled. To configure, see Configuring a Host for PCI Passthrough . Ensure that the HUT has a minimum of two NIC and both networks are routable to the test server. Executing the test The test is non-interactive. Run the following command and then select the appropriate fv_usb_network_passthrough test name from the list that displays. Note If the test fails due to network bandwidth issues, then you might have to increase the CPUs and RAM allocated to the virtual machine to achieve higher bandwidth. Run time The test takes around 90 minutes to run, but will vary in length depending on the size and speed of the USB device and connection. Additional resources For more information about guest images, see Downloading guest images during test execution . A.21. fv_usb_storage_passthrough What the test covers The fv_usb_storage_passthrough test is used to verify that control over a USB-attached storage device, in the host machine can be transferred to a virtual machine. The test is supported on Red Hat Enterprise Linux 8 and must be run on a host machine. This test is planned automatically if the host supports device passthrough and has IOMMU enabled. Starting with RHEL 9.4, this test is supported to run on ARM systems. RHEL version supported RHEL 8 RHEL 9 What the test does The test performs multiple sub tests to attach a host machine's USB device to a virtual machine and then run the storage tests inside the virtual machine. 
Successful completion of the test requires all of the subtests to pass. The test validates if the USB device connected to the host machine can be assigned to appear natively in the guest virtual machine, configures the guest virtual machine to use the passthrough USB device, and launches the virtual machine and ensures the device is functioning as expected inside it. Preparing for the test Ensure that the USB device is plugged into the host machine that supports device passthrough and has IOMMU enabled. To configure, see Configuring a Host for PCI Passthrough section in the Red Hat Virtualization Administration Guide . Executing the test The test is non-interactive. Run the following command and then select the appropriate fv_usb_storage_passthrough test name from the list that displays. Run time The test takes around 90 minutes to run, but will vary in length depending on the size and speed of the USB device and connection. Additional resources For more information about guest images, see Downloading guest images during test execution . A.22. fv_pcie_network_passthrough What the test covers The fv_pcie_network_passthrough test is used to verify that control over a PCIe-based network device, such as NIC, LOMs, ALOMs, in the host machine can be transferred to a virtual machine. The test is supported on Red Hat Enterprise Linux version 8 and above, and must be run on a host machine. This test is planned automatically if the host machine supports device passthrough and has IOMMU enabled. Starting with RHEL 9.4, this test is supported to run on ARM systems. RHEL version supported RHEL 8 RHEL 9 What the test does The test performs multiple subtests to attach a host machine's network device to a virtual machine and then run the network tests inside the virtual machine. Successful completion of the test requires all of the subtests to pass. The test validates if the PCIe device connected to the host machine can be assigned to appear natively in the guest virtual machine, configures the guest virtual machine to use the passthrough PCIe device, and launches the virtual machine and ensures the device is functioning as expected inside it. Preparing for the test Ensure that the Host Under Test (HUT) supports device passthrough and has IOMMU enabled. To configure, see Configuring a Host for PCI Passthrough section in the Red Hat Virtualization Administration Guide . Ensure that the HUT has a minimum of two NIC and both networks are routable to the test server. Executing the test The test is non-interactive. Run the following command and then select the appropriate fv_pcie_network_passthrough test name from the list that displays. Note If the test fails due to network bandwidth issues, then you might have to increase the CPUs and RAM allocated to the virtual machine to achieve higher bandwidth. Run time The test takes around 30 minutes to run. Additional resources For more information about guest images, see Downloading guest images during test execution . A.23. infiniband connection What the test does The Infiniband Connection test runs the following subtests to ensure a baseline functionality using, when appropriate, the IP address selected from the dropdown at the onset of the test: Ping test Runs ping from the starting IP address of the device being tested on the HUT to the selected IP address of the test server. Rping test Runs rping on test server and HUT using the selected test server IP address, then compares results to verify it ran to completion. 
Rcopy test Runs rcopy on the test server and HUT, sending a randomly generated file and comparing md5sums on the test server and HUT to verify the successful transfer. Rdma-ndd service test Verifies that the stop, start, and restart service commands function as expected. Opensm service test Verifies that the stop, start, and restart service commands function as expected. LID verification test Verifies that the LID for the device is set and is not the default value. Smpquery test Runs smpquery on the test server using the device and port to further verify that the device/port has been registered with the fabric. ib_write_bw test Runs ib_write_bw from the HUT to the selected IP address of the test server to test the InfiniBand write bandwidth and verify that it can reach the required bandwidth. The queue pair parameter has been adjusted during the bandwidth test to achieve a throughput closer to the line rate. ib_read_bw test Runs ib_read_bw from the HUT to the selected IP address of the test server to test the InfiniBand read bandwidth and verify that it can reach the required bandwidth. The queue pair parameter has been adjusted during the bandwidth test to achieve a throughput closer to the line rate. ib_send_bw test Runs ib_send_bw from the HUT to the selected IP address of the test server to test the InfiniBand send bandwidth and verify that it can reach the required bandwidth. The queue pair parameter has been adjusted during the bandwidth test to achieve a throughput closer to the line rate.
Preparing for the test
Ensure that the test server and HUT are separate machines, on the same fabric(s).
Executing the test
This is an interactive test. Run the following command and then select the appropriate infiniband connection test name from the list that displays. You will be prompted with a dropdown to select an IP address (an IP address of the test server) with which to perform the tests. Select an IP address corresponding to a device on the same fabric as the HUT device you are running the test for.
Table A.1. Manually adding and running the test
Rate Type Command to manually add infiniband connection Test Command to Manually run infiniband connection Test Infiniband_QDR Infiniband_FDR Infiniband_EDR Infiniband_HDR Infiniband_NDR Infiniband_Socket_Direct
Replace <device name>, <port number>, <net device>, and <test server IP addr> with the appropriate values.
Run time
This test takes less than 10 minutes to run.
Additional resources
For more information about InfiniBand and RDMA, see Understanding InfiniBand and RDMA technologies.
A.24. intel_sst
What the test covers
The intel_sst test is a CPU frequency scaling test that exercises Intel's Speed Select Technology (SST) feature. Use this feature to customize per-core performance to match the CPU to the workload and to allocate performance per core. This enables you to boost the performance of targeted applications at runtime.
RHEL version supported
RHEL 8
RHEL 9
What the test does
The intel_sst test runs on SST-enabled systems only and supports the following features:
Speed Select Base Freq (SST-BF) - Allows specific cores to run at a higher base frequency (P1) by reducing the base frequencies (P1) of other cores.
Frequency Prioritization (SST-CP) - Allows specific cores to clock higher by reducing the frequency of cores running lower-priority software.
The test checks if the above features are supported and configured on the system. Based on the result, it will execute only one subtest for the respective feature.
Preparing for the test
You must run this test on Intel chipset architectures only.
To use the Intel(R) SST-BF functionality on a Red Hat Enterprise Linux (RHEL) based platform, the prerequisites are:
Enable the Intel(R) SST-BF feature in the BIOS
Configure the kernel parameter - intel_idle.max_cstate=1
To use the Intel(R) SST-CP functionality on a Red Hat Enterprise Linux (RHEL) based platform, the prerequisites are:
Enable the Intel(R) SST-CP feature in the BIOS
Configure the kernel parameters - intel_idle.max_cstate=1 intel_pstate=disable
Executing the test
This test is non-interactive. Run the following command and then select the appropriate intel_sst test name from the list that displays.
Run time
This test takes around 5 minutes to complete. Any other mandatory or selected tests will add to the overall run time.
A.25. iPXE
What the iPXE test covers
The iPXE test is an interactive test that runs on x86 Red Hat Enterprise Linux (RHEL) systems. The system should boot in UEFI boot mode. If the efi directory exists, the machine is running in UEFI boot mode. Run the following command to determine if your machine is running in UEFI mode:
RHEL version supported
RHEL 8
RHEL 9
What the test does
iPXE is the leading open source network boot firmware. It provides a full PXE implementation enhanced with additional features such as booting from HTTP, SAN, and wireless networks. This test checks if the underlying NIC supports iPXE by using HTTP boot. While performing the iPXE test, the test server does not return any bootable image. The boot screen will display a could not boot error; this is an expected error message. The HUT will then boot with the boot loader of the installed RHEL OS.
Preparing for the test
Ensure that the host under test is in UEFI boot mode. iPXE tests the first interface that it finds; therefore, on the host under test, ensure that the interface to be tested is plugged in. Ensure that the httpd service is not running on the test server while running this test, as this test uses port 80 to communicate with the test server.
Executing the test
Run the following command and then select the appropriate iPXE test name from the list that displays. The ipxe test does not appear in the test plan, so you must use the following commands to plan and execute it manually, respectively.
The test will first configure the Host Under Test (HUT) for the iPXE test. It will save the MAC address details of the HUT, then create a new boot loader entry with the iPXE binary and mark it as the next boot entry. After that, it will prompt for a reboot; press Yes to continue. The test server will display waiting for a response after it sends the reboot command. The HUT will reboot into the new boot loader, which in turn loads the iPXE prompt and performs a GET request to see if it is able to reach the test server. As it is just a GET request, the boot will fail and the system will fall back to the original boot loader, that is, the RHEL OS. The test server will continuously monitor the host under test to see if it has rebooted. After the reboot, the test will continue. The test will first revert the boot changes made for iPXE and then verify whether the iPXE boot was successful. It will compare the MAC address received from the GET request of the iPXE boot with the MAC address already saved. If the MAC addresses match, the iPXE test is successful.
Run time
The test takes less than 5 minutes to run. Any other mandatory or selected tests will add to the overall run time.
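As noted above, the iPXE test requires the HUT to be booted in UEFI mode. A common way to confirm this, assuming the standard sysfs layout, is:
# The directory exists only when the system was booted through UEFI firmware
if [ -d /sys/firmware/efi ]; then echo "UEFI boot mode"; else echo "Legacy BIOS boot mode"; fi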
A.26. iwarp connection
What the test does
The iWARP Connection test runs the following subtests to ensure baseline functionality using, when appropriate, the IP address selected from the dropdown at the onset of the test: Ping test - Runs ping from the starting IP address of the device being tested on the HUT to the selected IP address of the test server. Rping test - Runs rping on the test server and HUT using the selected test server IP address, then compares results to verify it ran to completion. Rcopy test - Runs rcopy on the test server and HUT, sending a randomly generated file and comparing md5sums on the test server and HUT to verify successful transfer. Ethtool test - Runs the ethtool command, passing in the detected net device of the iWARP device. ib_write_bw test Runs ib_write_bw from the HUT to the selected IP address of the test server to test the iWARP write bandwidth and verify that it can reach the required bandwidth. ib_read_bw test Runs ib_read_bw from the HUT to the selected IP address of the test server to test the iWARP read bandwidth and verify that it can reach the required bandwidth. ib_send_bw test Runs ib_send_bw from the HUT to the selected IP address of the test server to test the iWARP send bandwidth and verify that it can reach the required bandwidth.
Preparing for the test
Ensure that the test server and HUT are separate machines, on the same fabric(s).
Executing the test
This is an interactive test. Run the following command and then select the appropriate iwarp connection test name from the list that displays. You will be prompted with a dropdown to select an IP address (an IP address of the test server) with which to perform the tests. Select an IP address corresponding to a device on the same fabric as the HUT device you are running the test for.
Table A.2. Manually adding and running the test
Speed Type Command to manually add IWarpConnection Test Command to Manually run IWarpConnection Test 10GigiWarp 20GigiWarp 25GigiWarp 40GigiWarp 50GigiWarp 100GigiWarp 200GigiWarp
Replace <device name>, <port number>, <net device>, and <test server IP addr> with the appropriate values.
Run time
This test takes less than 10 minutes to run.
Additional resources
For more information about InfiniBand and RDMA, see Understanding InfiniBand and RDMA technologies.
A.27. kdump
What the test covers
The kdump test uses the kdump service to check that the system can capture a vmcore file after a crash, and that the captured file is valid.
What the test does
The test includes the following subtests:
kdump with local: Using the kdump service, this subtest performs the following tasks: Crashes the host under test (HUT). Writes a vmcore file to the local /var/crash directory. Validates the vmcore file.
kdump with NFS: Using the kdump service, this subtest performs the following tasks: Mounts the /var/rhcert/export filesystem on the HUT's /var/crash directory. This filesystem is shared over NFS from the test server. Crashes the HUT. Writes a vmcore file to the /var/crash directory. Validates the vmcore file.
Preparing for the test
Ensure that the HUT is connected to the test server before running the test. Ensure that the rhcertd process is running on the test server. The certification test suite prepares the NFS filesystem automatically. If the suite cannot set up the environment, the test fails.
Executing the test
Log in to the HUT. Run the kdump test: To use the rhcert-run command, perform the following steps: Run the rhcert-run command: # rhcert-run Select the kdump test.
The test runs both subtests sequentially. To use the rhcert-cli command, choose whether to run both subtests sequentially, or specify a subtest: To run both subtests sequentially, use the following command: # rhcert-cli run --test=kdump --server=<test server's IP> To run the kdump with local subtest only, use the following command: # rhcert-cli run --test=kdump --device=local To run the kdump with NFS subtest only, use the following command: # rhcert-cli run --test=kdump --device=nfs --server=<test server's IP> Additionally, for the kdump with NFS test, execute the following command on the Test Server: # rhcertd start Wait for the HUT to restart after the crash. The kdump service shows several messages while it saves the vmcore file to the /var/crash directory. After the vmcore file is saved, the HUT restarts. Log in to the HUT after reboot, the rhcert suite will verify if the vmcore file exists, and if it is valid. If the file does not exist or is invalid, the test fails. If you are running the subtests sequentially, the kdump with NFS subtest starts after the validation of the vmcore file has completed. Run time The run time of the kdump test varies according to factors such as the amount of RAM in the HUT, the disc speed of the test server and the HUT, the network connection speed to the test server, and the time taken to reboot the HUT. For a 2013-era workstation with 8GB of RAM, a 7200 RPM 6Gb/s SATA drive, a gigabit Ethernet connection to the test server, and a 1.5 minute reboot time, a local kdump test can complete in about four minutes, including the reboot. The same 2013-era workstation can complete an NFS kdump test in about five minutes to a similarly equipped network test server. The supportable test will add about a minute to the overall run time. A.28. lid What the test covers The lid test is only valid for systems that have integrated displays and therefore have a lid that can be opened and closed. The lid is detected by searching the udev database for a device with "lid" in its name: What the test does The test ensures that the system can determine when its lid is closed and when it is open via parameters in udev, and that it can turn off the display's backlight when the lid is closed. Preparing for the test To prepare for the test, ensure that the power management settings do not put the system to sleep or into hibernation when the lid is closed. Make sure the lid is open before you start the test run. Executing the test The lid test is interactive. Run the following command and then select the appropriate lid test name from the list that displays. You will be asked if you are ready to begin the test, so answer Yes to continue. Close the lid when prompted, watching to see if the backlight turns off. You may have to look through the small space between the keyboard and lid when the laptop is closed to verify that the backlight has turned off. Answer Yes if the backlight turns off or No if the backlight does not turn off. Run time The lid test takes about 30 seconds to perform, essentially the time it takes to close the lid just enough to have the backlight turn off. Because this test is run on laptops, a suspend test must accompany the required supportable test for each run. The suspend test will add approximately 6 minutes to each test run, and supportable will add another minute. A.29. memory What the memory test covers The memory test is used to test system RAM. It does not test USB flash memory, SSD storage devices or any other type of RAM-based hardware. 
It tests main memory only. A memory per CPU core check has been added to the planning process to verify that the HUT meets the RHEL minimum requirement memory standards. It is a planning condition for several of the hardware certification tests, including the ones for memory, core, realtime, and all the full-virtualization tests. If the memory per CPU core check does not pass, the above-mentioned tests will not be planned automatically. However, these tests can be planned manually via CLI. RHEL version supported What the test does: The test uses the file /proc/meminfo to determine how much memory is installed in the system. Once it knows how much is installed, it checks to see if the system architecture is 32-bit or 64-bit. Then it determines if swap space is available or if there is no swap partition. The test runs either once or twice with slightly different settings depending on whether or not the system has a swap file: If swap is available, allocate more RAM to the memory test than is actually installed in the system. This forces the use of swap space during the run. Regardless of swap presence, allocate as much RAM as possible to the memory test while staying below the limit that would force out of memory (OOM) kills. This version of the test always runs. In both iterations of the memory test, malloc() is used to allocate RAM, the RAM is dirtied with a write of an arbitrary hex string (0xDEADBEEF), and a test is performed to ensure that 0xDEADBEEF is actually stored in RAM at the expected addresses. The test calls free() to release RAM when testing is complete. Multiple threads or multiple processes will be used to allocate the RAM depending on whether the process size is greater than or less than the amount of memory to be tested. Preparing for the test Install the correct amount of RAM in the system in accordance with the rules in the Policy Guide. Executing the test The memory test is non-interactive. Run the following command and then select the appropriate memory test name from the list that displays. Run time, bare-metal The memory test takes about 16 minutes to run on a 2013-era, single CPU, 6-core/12-thread 3.3GHz Intel-based workstation with 8GB of RAM running Red Hat Enterprise Linux, AMD64 and Intel 64. The test will take longer on systems with more RAM. The required supportable test will add about a minute to the overall run time. Run time, full-virt guest The fv_memory test takes slightly longer than the bare-metal version, about 18 minutes, to run in a guest. The added time is due to guest startup/shutdown activities and the required supportable test that runs in the guest. The required supportable test on the bare-metal system will add about a minute to the overall run time. The fv_memory test run times will not vary as widely from machine to machine as the bare-metal memory tests, as the amount of RAM assigned to our pre-built guest is always the same. There will be variations caused by the speed of the underlying real system, but the amount of RAM in use during the test won't change from machine to machine. Creating and Activating Swap for EC2 : Partners can perform the following steps to create and activate swap for EC2 A.29.1. memory_HBM What the memory_HBM tests cover The memory_HBM tests are used to test system High Bandwidth Memory (HBM) on systems with it present. One of the three possible tests is planned based on the HBM operating mode. If the system HBM is not supported a regular memory test is planned instead. 
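Returning to the standard memory test above: the starting figures that it reads from /proc/meminfo can be previewed with the short sketch below (illustration only; the test performs this parsing itself).
# Installed RAM and configured swap, as read by the memory test
grep -E '^(MemTotal|SwapTotal)' /proc/meminfo
# The same MemTotal value expressed in GiB for convenience
awk '/^MemTotal/ {printf "Installed RAM: %.1f GiB\n", $2/1024/1024}' /proc/meminfo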
RHEL version supported RHEL 8 RHEL 9 What the tests do The memory_HBM tests are memory tests specifically for systems with HBM present. Preparing for the test Ensure that the system HBM meets the requirements specified in the Policy Guide. Executing the test Run rhcert-cli plan . One of the memory_HBM tests will be planned if your HBM configuration meets the requirements and one of the following conditions: memory_HBM_only : There is no DIMM installed in the system memory_HBM_cache : HBM acts as cache to the DIMM memory_HBM_flat : DIMM and HBM are available as a total amount of memory To run the test, use the command rhcert-cli run --test . For example, rhcert-cli run --test hwcert/memory_HBM_cache runs the memory_HBM_cache test. Follow the instructions of the test. Use rhcert-print to check the result. To save the result use rhcert-save . A.29.2. memory_CXL What the test covers The memory_CXL test evaluates systems equipped with Type 3 Compute Express Link (CXL) devices. It verifies the functionality of CXL memory devices, ensuring proper integration with traditional system memory. The test identifies connected CXL devices, reconfigures them as system RAM if needed, and logs memory bandwidth performance. Note Run the memory test after completing the memory_CXL test. RHEL version supported RHEL 9.3 and later What the test does The memory_CXL test detects and validates installed CXL memory devices. Use the following command to check for the presence of CXL devices: This command lists all PCI devices in the system and identifies any CXL memory devices. If a CXL memory device is detected but not configured as system RAM, reconfigure it with the following commands: Preparing for the test Before running the test, ensure the following: The system is equipped with CXL memory modules that meet the requirements in the Policy Guide. The system firmware recognizes the installed CXL memory. The CXL SPM (Single Port Memory) option is enabled in the BIOS. Executing the test The test is non-interactive. Run the following command and select the appropriate memory_CXL test name from the displayed list. Run time The test usually takes 5-10 minutes to complete, depending on the system configuration and the amount of CXL memory. Ensure the system remains stable and operational during the test. A.30. network What the test covers The network test checks devices that transfer data over a TCP/IP network. The test can check multiple connection speeds and bandwidths of both wired and wireless devices based on the corresponding test designed for it, as listed in the following table: Different tests under Network test Ethernet test Description 1GigEthernet The network test with added speed detection for 1 gigabit Ethernet connections. 10GigEthernet The network test with added speed detection for 10 gigabit Ethernet connections. 20GigEthernet The network test with added speed detection for 20 gigabit Ethernet connections. 25GigEthernet The network test with added speed detection for 25 gigabit Ethernet connections. 40GigEthernet The network test with added speed detection for 40 gigabit Ethernet connections. 50GigEthernet The network test with added speed detection for 50 gigabit Ethernet connections. 100GigEthernet The network test with added speed detection for 100 gigabit Ethernet connections. 200GigEthernet The network test with added speed detection for 200 gigabit Ethernet connections. 
Ethernet If the Ethernet test is listed in your local test plan, it indicates that the test suite did not recognize the speed of that device. Check the connection before attempting to test that particular device.
Wireless test Description
WirelessG The network test with added speed detection for 802.11g wireless Ethernet connections.
WirelessN The network test with added speed detection for 802.11n wireless Ethernet connections.
WirelessAC The network test with added speed detection for 802.11ac wireless Ethernet connections.
WirelessAX (Superseded by WiFi6) The network test with added speed detection for 802.11ax wireless Ethernet connections.
WiFi6 The network test with added speed detection for 802.11ax wireless Ethernet connections.
WiFi6E The network test with added speed detection for 802.11ax (6 GHz) wireless Ethernet connections.
What the test does
The test runs the following subtests to gather information about all the network devices:
The bounce test on the interface is conducted using the nmcli conn up and nmcli conn down commands. If the root partition is not NFS or iSCSI mounted, the bounce test is performed on the interface. Additionally, all other interfaces that will not be tested are shut down to ensure that traffic is routed through the interface being tested. If the root partition is NFS or iSCSI mounted, the bounce test on the interface responsible for the iSCSI or NFS connection is skipped, and all other interfaces, except for the one handling the iSCSI or NFS connection, will be shut down.
A test file is generated from /dev/urandom, and its size is scaled according to the speed of your NIC.
TCP and UDP testing - The test uses the iperf tool to: Test TCP latency between the test server and the host under test. The test checks if the system runs into any OS timeouts and fails if it does. Test the bandwidth between the test server and the host under test. For wired devices, it is recommended that the speed be close to the theoretical maximum. Test UDP latency between the test server and the host under test. The test checks if the system runs into any OS timeouts and fails if it does.
File transfer testing - The test uses SCP to transfer a file from the host under test to the remote system or test server and then transfers it back to the host under test to check that the transfer works properly.
ICMP (ping) test - The script causes a ping flood at the default packet size to ensure nothing in the system fails (the system should not restart, reset, or show any other sign that it cannot withstand a ping flood). 5000 packets are sent and a 100% success rate is expected. The test retries 5 times for an acceptable success rate.
Finally, the test brings all interfaces back to their original state (active or inactive) when the test completes.
Preparing for testing wired devices
You can test as many network devices as you want in each test run. Before you begin: Ensure that each device is connected at its native (maximum) speed; otherwise, the test fails. Ensure that the test server is up and running. Ensure that each network device has an IP address assigned either statically or dynamically via DHCP. Ensure that the required firewall ports are open so that the iperf tool can run the TCP and UDP subtests.
Note By default, ports 52001-52101 are open. If you want to change the default ports, update the iperf-port and total-iperf-ports values in the /etc/rhcert.xml configuration file.
Example: <server listener-port="8009" iperf-port="52001" total-iperf-ports="100"> If the firewall ports are not open, the test prompts to open the firewall ports during the test run. Partitionable networking The test checks if any of the network devices support partitioning, by checking the data transfer at full speed and the partitioning function. Running the test based on the performance of NIC: If NIC runs at full speed while partitioned then, configure a partition with NIC running at its native speed and Perform the network test in that configuration. If NIC does not run at full speed while partitioned then, run the test twice - first time, run it without partitioning to see the full-speed operation, and the second time, run it with partitioning enabled to see the partitioning function. Note Red Hat recommends selecting either 1Gb/s or 10Gb/s for your partitioned configuration so that it conforms to the existing network speed tests. Preparing for testing wireless Ethernet devices Based on the wireless card that is being tested, the wireless access point that you connect to must have the capability to perform WirelessG, WirelessN, WirelessAC, WirelessAX, WiFi6, and WiFi6E network tests. Executing the test The network test is non-interactive. Run the following command and then select the appropriate network test name from the list that displays. Table A.3. Manually adding and running the test Speed Type Command to manually add Ethernet Test Command to Manually run Ethernet Test 1GigEthernet 10GigEthernet 20GigEthernet 25GigEthernet 40GigEthernet 50GigEthernet 100GigEthernet 200GigEthernet 400GigEthernet Replace <device name> and <test server IP addr> with the appropriate value. Run time The network test takes about 2 minutes to test each PCIe-based, gigabit, wired Ethernet card, and the required Supportable test adds about a minute to the overall run time. Additional resources For more information about the remaining test functionality, see Ethernet test . A.31. NetworkManageableCheck What the test covers The NetworkManageableCheck test runs for all the network interfaces available in the system. RHEL version supported RHEL 8 RHEL 9 What the test does The test comprises two subtests that perform the following tasks: Check the BIOS device name to confirm that the interface follows the terminology set by the firmware. Note BIOS device name validation runs only on x86 systems. Check if the Network Manager manages the interface, for evaluating current network management status. Executing the test The NetworkManageableCheck test is mandatory. It is planned and executed with a self-check and supportable test to ensure thorough examination and validation of network interfaces. Run time The test takes around 1 minute to complete. However, the duration of the test varies depending on the specifics of the system and the number of interfaces. A.32. NVMe over Fabric tests NVMe over Fabrics, also known as NVMe-oF and non-volatile memory express over fabrics, is a protocol specification designed to connect hosts to storage across a network fabric using the NVMe protocol. The protocol is designed to enable data transfers between a host computer and a target solid-state storage device or system over a network - accomplished through NVMe message-based commands. Data transfers can be transferred through methods such as Ethernet or InfiniBand. A.32.1. nvme_infiniband What the test covers The nvme_infiniband test verifies the access and use of NVMe SSD drives over the RDMA network. 
The Host Under Test is configured as an NVMe client, and the lab agent system is configured as an NVMe target. RHEL version supported RHEL 8 RHEL 9 What the test does The test runs multiple subtests to: Verify that necessary kernel modules are loaded, and that the NVMe client is connected to the NVMe target. Establish and confirm the connection between NVMe target and client by running discovery, disconnect, and connect commands. Detect the network interface used to connect to the storage device in the target system and accordingly run one type of test from each of the STORAGE and infiniband connection tests. For the NVMe over Fabric storage test, the test is executed on the NVMe client system but the NVMe device physically resides on the NVMe target host. Both NVMe client and target hosts communicate using the RDMA protocol. Preparing for the test Before you begin the test, ensure that: The NVMe target and NVMe client systems are configured properly and are part of the RDMA network. The NVMe client and NVMe target are running the same RHEL version. Otherwise, communication between the NVMe client on RHEL 9.0 and the NVMe target on RHEL 8.5 will result in an error similar to Invalid MNAN value 1024 attempting nvme connect . Executing the test The test is non-interactive. Currently, this test can be planned and executed via CLI only. Run time This test takes about 15 minutes to run. Any other mandatory or selected tests will add to the overall run time. A.32.2. nvme_iwarp What the test covers The nvme_iwarp test verifies the access and use of NVMe SSD drives over the RDMA network. The test is supported to run on RHEL 8. The Host Under Test is configured as an NVMe client, and the lab agent system is configured as an NVMe target. RHEL version supported RHEL 8 RHEL 9 What the test does The test runs multiple subtests to: Verify that necessary kernel modules are loaded, and that the NVMe client is connected to the NVMe target. Establish and confirm the connection between NVMe target and client by running discovery, disconnect, and connect commands. Detect the network interface used to connect to the storage device in the target system and accordingly run one type of test from each of the STORAGE and iwarp connection tests. For the NVMe over Fabric storage test, the test is executed on the NVMe client system but the NVMe device physically resides on the NVMe target host. Both NVMe client and target hosts communicate using the RDMA protocol. Preparing for the test Before you begin the test, ensure that the: NVMe client is running RHEL 8.x or 9.x. NVMe target and NVMe client systems are configured properly and are part of the RDMA network. NVMe client and NVMe target are running the same RHEL version, otherwise, communication between NVMe client on RHEL 9.0 to NVMe target on RHEL 8.5 will result in an error, Invalid MNAN value 1024 attempting nvme connect . Executing the test The test is non-interactive. Currently, this test can be planned and executed via CLI only. Run time This test takes about 15 minutes to run. Any other mandatory or selected tests will add to the overall run time. A.32.3. nvme_omnipath What the test covers The nvme_omnipath test verifies the access and use of NVMe SSD drives over the RDMA network. The test is supported to run on RHEL 8. The Host Under Test is configured as an NVMe client, and the lab agent system is configured as an NVMe target. 
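The discovery, connect, and disconnect operations that these NVMe over Fabrics tests exercise look roughly like the following nvme-cli sketch. The target address, port, and NQN shown here are hypothetical, and the transport is -t rdma for the RDMA-based tests or -t tcp for nvme_tcp.
# Discover the subsystems exported by the NVMe target
nvme discover -t rdma -a 192.168.100.10 -s 4420
# Connect to a discovered subsystem and confirm the remote namespace appears locally
nvme connect -t rdma -a 192.168.100.10 -s 4420 -n nqn.2014-08.org.example:nvme-target
nvme list
# Disconnect when finished
nvme disconnect -n nqn.2014-08.org.example:nvme-target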
RHEL version supported RHEL 8 RHEL 9 What the test does The test runs multiple subtests to: Verify that necessary kernel modules are loaded, and that the NVMe client is connected to the NVMe target. Establish and confirm the connection between NVMe target and client by running discovery, disconnect, and connect commands. Detect the network interface used to connect to the storage device in the target system and accordingly run one type of test from each of the STORAGE and omnipath connection tests. For the NVMe over Fabric storage test, the test is executed on the NVMe client system but the NVMe device physically resides on the NVMe target host. Both NVMe client and target hosts communicate using the RDMA protocol. Preparing for the test Before you begin the test, ensure that the: NVMe client is running RHEL 8.x or 9.x. NVMe target and NVMe client systems are configured properly and are part of the RDMA network. NVMe client and NVMe target are running the same RHEL version, otherwise, communication between NVMe client on RHEL 9.0 to NVMe target on RHEL 8.5 will result in an error, Invalid MNAN value 1024 attempting nvme connect . Executing the test The test is non-interactive. Currently, this test can be planned and executed via CLI only. Run time This test takes about 15 minutes to run. Any other mandatory or selected tests will add to the overall run time. A.32.4. nvme_roce What the test covers The nvme_roce test verifies the access and use of NVMe SSD drives over the RDMA network. The test is supported to run on RHEL 8. The Host Under Test is configured as an NVMe client, and the lab agent system is configured as an NVMe target. RHEL version supported RHEL 8 RHEL 9 What the test does The test runs multiple subtests to: Verify that necessary kernel modules are loaded, and that the NVMe client is connected to the NVMe target. Establish and confirm the connection between NVMe target and client by running discovery, disconnect, and connect commands. Detect the network interface used to connect to the storage device in the target system and accordingly run one type of test from each of the STORAGE and RoCE connection tests. For the NVMe over Fabric storage test, the test is executed on the NVMe client system but the NVMe device physically resides on the NVMe target host. Both NVMe client and target hosts communicate using the RDMA protocol. Preparing for the test Before you begin the test, ensure that the: NVMe client is running RHEL 8.x or 9.x. NVMe target and NVMe client systems are configured properly and are part of the RDMA network. NVMe client and NVMe target are running the same RHEL version, otherwise, communication between NVMe client on RHEL 9.0 to NVMe target on RHEL 8.5 will result in an error, Invalid MNAN value 1024 attempting nvme connect . Executing the test The test is non-interactive. Currently, this test can be planned and executed via CLI only. Run time This test takes about 15 minutes to run. Any other mandatory or selected tests will add to the overall run time. A.32.5. nvme_tcp What the test covers The nvme_tcp test verifies the access and use of NVMe SSD drives over the TCP network. The test is currently available as a Technology Preview and is supported to run on RHEL 8. The Host Under Test is configured as an NVMe client, and the lab agent system is configured as an NVMe target. 
RHEL version supported RHEL 8 RHEL 9 What the test does The test runs multiple subtests to: Verify that necessary kernel modules are loaded, and that the NVMe client is connected to the NVMe target. Establish and confirm the connection between NVMe target and client by running discovery, disconnect, and connect commands. Detect the network interface used to connect to the storage device in the target system and accordingly run one type of test from each of the STORAGE and NETWORK tests. For the NVMe over Fabric storage test, the test is executed on the NVMe client system but the NVMe device physically resides on the NVMe target host. Both NVMe client and target hosts communicate using the TCP protocol. Preparing for the test Before you begin the test, ensure that the: NVMe client is running RHEL 8.x. NVMe target and NVMe client systems are configured properly and are part of the RDMA network. Note The default TCP port number for NVMe over TCP is 8009. The default TCP port number for NVMe over RDMA is 4420. You can use any TCP port number that does not conflict with other current applications. If there is a port conflict, then reconfigure the NVMe port number 8009 with a different TCP port number. Executing the test The test is non-interactive. Currently, this test can be planned and executed via CLI only. Run time This test takes about 10 minutes to run. Any other mandatory or selected tests will add to the overall run time. A.33. omnipath connection What the test does The Omnipath Connection test runs the following subtests to ensure a baseline functionality using, when appropriate, the IP address selected from the dropdown at the onset of the test: Ping test - Runs ping from the starting IP address of the device being tested on the HUT to the selected IP address of the test server. Rping test - Runs rping on test server and HUT using the selected test server IP address, then compares results to verify it ran to completion. Rcopy test - Runs rcopy on test server and HUT, sending a randomly generated file and comparing md5sums on test server and HUT to verify successful transfer. Rdma-ndd service test - Verifies stop, start and restart service commands function as expected. Opensm service test - Verifies stop, start and restart service commands function as expected. LID verification test - Verifies that the LID for the device is set and not the default value. Link speed test - Verifies that the detected link speed is 100Gb. Smpquery test - Runs spmquery on test server using device and port for another verification the device/port has been registered with the fabric. ib_write_bw test Run ib_write_bw from the HUT to the selected IP address of the test server to test the Omnipath write bandwidth and verify if it can reach the required bandwidth. The queue pair parameter has been adjusted during the bandwidth test to achieve a throughput closer to the line rate. ib_read_bw test Run ib_read_bw from the HUT to the selected IP address of the test server to test the Omnipath read bandwidth and verify if it can reach the required bandwidth. The queue pair parameter has been adjusted during the bandwidth test to achieve a throughput closer to the line rate. ib_send_bw test Run ib_send_bw from the HUT to the selected IP address of the test server to test the Omnipath send bandwidth and verify if it can reach the required bandwidth. The queue pair parameter has been adjusted during the bandwidth test to achieve a throughput closer to the line rate. 
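A manual run of one of these bandwidth subtests looks roughly like the following perftest sketch; the RDMA device name, queue-pair count, and test server address are hypothetical, and the certification suite tunes these values itself.
# On the test server: start the ib_write_bw listener on the device under test
ib_write_bw -d hfi1_0 -F -q 4
# On the HUT: run the same test against the test server's fabric IP address
ib_write_bw -d hfi1_0 -F -q 4 192.168.100.1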
Preparing for the test Ensure that the test server and HUT are separate machines, on the same fabric. Install opa-basic-tools on the test server from the Downloads section of the Red Hat Customer Portal. Executing the test This is an interactive test. Run the following command and then select the appropriate omnipath connection test name from the list that displays. You will be prompted with a dropdown to select an IP address (an IP address of the test server) on which to perform the tests. Select an IP address corresponding to a device on the same fabric as the HUT device you are running the test for. Manually adding and running the test Use the following command to add the OmnipathConnectionTest manually: Use the following command to manually run the OmnipathConnectionTest: Run time This test takes less than 10 minutes to run. Additional resources For more information about InfiniBand and RDMA, see Understanding InfiniBand and RDMA technologies . A.34. power_stop What the test covers The Suspend-to-Idle state, which, when enabled, allows a processor to be in its deepest idle state while the system is suspended. It freezes user space and puts all I/O devices into low-power states, thereby reducing power consumption. The power_stop test is designed to verify whether enabling these Stop (or idle) states works as expected on a ppc64le CPU architecture machine, specifically on POWER9-based systems. RHEL version supported RHEL 8 RHEL 9 What the test does The test uses the lsprop command to collect information about all the idle-stop states that a particular system supports, and the cpupower command to enable and disable those states. The test observes the usage and duration counter increments of each CPU idle state to confirm whether it is enabled. Success Criteria : Change in the usage and duration parameter values for the stop state before and after enabling it. PASS: If every state increases its counter parameter values WARN: If any one state fails to increase its counter parameter values FAIL: If any state does not increase its counter REVIEW: Any other unknown issue Preparing for the test This test is planned automatically if the Host Under Test (HUT) meets the following requirements: The HUT is running one of the supported RHEL versions. The underlying architecture is ppc64le. The CPU model is POWER9. Note This test is not supported and will fail when executed on any other RHEL version or architecture. Executing the test The test is non-interactive. Run the following command and then select the appropriate power_stop test name from the list that displays. Run time The test takes less than five minutes to finish, but can vary depending on the number of CPU Idle Stop states. A.35. profiler The profiler test collects performance metrics from the Host Under Test and determines whether the metrics are collected from the software or the hardware Performance Monitoring Unit (PMU) supported by the RHEL Kernel. If the metrics are hardware-based, the test further determines whether the PMU includes per-core counters only or also includes per-package counters. The profiler test is divided into three tests, profiler_hardware_core , profiler_hardware_uncore , and profiler_software . A.35.1. profiler_hardware_core What the test covers The profiler_hardware_core test collects performance metrics using hardware-based per-core counters by checking the cycle events. The core events measure the functions of a processor core, for example, the L2 cache.
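The core cycle counters that this test relies on can be spot-checked by hand with the perf utility, assuming the perf package is installed. This is an illustration only and not necessarily the exact command the test runs:
# Count hardware cycle events system-wide for one second
$ perf stat -e cycles -a sleep 1
# Record samples of the cycles event and confirm that samples were collected
$ perf record -e cycles -a -- sleep 1
$ perf report --stdio | head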
What the test does The test is planned if core hardware event counters are found, which is determined by locating the cpu*cycles files in the /sys/devices directory with the find /sys/devices/* -type f -name 'cpu*cycles' command. The test executes multiple commands to accumulate samples of 'cycle' events, checks if the 'cpu cycle' event was detected, and checks if the samples were collected. Note This test is not intended to be exhaustive, and it does not test every possible core counter-event that a given processor may or may not have. Preparing for the test There are no special requirements to run this test. Executing the test The test is non-interactive. Run the following command and then select the appropriate profiler_hardware_core test name from the list that displays. Run time The test takes approximately 30 seconds. Any other mandatory or selected tests will add to the overall run time. A.35.2. profiler_hardware_uncore What the test covers The profiler_hardware_uncore test collects performance metrics using hardware-based package-wide counters. The uncore events measure the functions of a processor that are outside the core but are inside the package, for example, a memory controller. RHEL version supported RHEL 8 RHEL 9 What the test does The test is planned if uncore hardware event counters are found. The test passes if it finds any uncore events and collects statistics for any one event. The test fails if it finds uncore events but cannot collect statistics because those events are not supported. The test executes multiple commands to collect the list of uncore events and the uncore event statistics. Note This test is not intended to be exhaustive, and it does not test every possible uncore counter-event that a given processor may or may not have. Preparing for the test There are no special requirements to run this test. Executing the test The test is non-interactive. Run the following command and then select the appropriate profiler_hardware_uncore test name from the list that displays. Run time The test takes approximately 30 seconds. Any other mandatory or selected tests will add to the overall run time. A.35.3. profiler_software What the test covers The profiler_software test collects performance metrics using software-based counters by checking the cpu_clock events. Software counters can be certified using this test. However, for customers with high-performance requirements, this test can be limiting. What the test does The test is planned if no core hardware event counters are found. The test executes multiple commands to accumulate samples of cpu-clock events, checks if the cpu-clock event was detected, and checks if the samples were collected. Preparing for the test There are no special requirements to run this test. Executing the test The test is non-interactive. Run the following command and then select the appropriate profiler_software test name from the list that displays. Run time The test takes approximately 30 seconds. Any other mandatory or selected tests will add to the overall run time. A.36. realtime What the test covers The realtime test covers testing of systems running Red Hat Enterprise Linux for Real Time with two sets of tests: one to find system management mode-based execution delays and one to determine the latency of servicing timer events. Additionally, for RHEL 8 and RHEL 9, the test ensures that there are cores reserved for housekeeping instead of fully utilizing all of them. Note The test is only planned on systems running a Red Hat kernel. A quick manual check of the realtime prerequisites is sketched below.
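Whether a system meets the planning conditions for this test can be confirmed by hand; the commands below are illustrative only and are not part of the test suite:
# The running kernel should be the Red Hat realtime (kernel-rt) kernel
$ uname -r
# On RHEL 8 and RHEL 9, list the CPUs isolated for realtime work (the remaining CPUs handle housekeeping; empty output means none are isolated)
$ cat /sys/devices/system/cpu/isolated
# Confirm that the realtime tuned profile is active
$ tuned-adm active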
What the test does The first portion of the test loads a special kernel module named hwlat_detector.ko . This module creates a kernel thread that polls the Timestamp Counter Register (TSC), looking for intervals between consecutive reads that exceed a specified threshold. Gaps in consecutive TSC reads mean that the system was interrupted between the reads and executed other code, usually System Management Mode (SMM) code defined by the system BIOS. The second part of the test starts a program named cyclictest , which starts a measurement thread per CPU, running at a high realtime priority. These threads have a period (100 microseconds) where they perform the following calculation: get a timestamp (t1) sleep for period get a second timestamp (t2) latency = t2 - (t1 + period) goto 1 Note The latency is the time difference between the theoretical wakeup time (t1+period) and the actual wakeup time (t2). Each measurement thread tracks minimum, maximum, and average latency and reports each datapoint. While cyclictest runs, rteval starts a pair of system loads, one being a parallel Linux kernel compile and the other being a scheduler benchmark called hackbench . When the run is complete, rteval performs a statistical analysis of the data points, calculating mean, mode, median, variance, and standard deviation. Additionally, for RHEL 8 and RHEL 9, the test checks if there are isolated CPUs configured in /sys/devices/system/cpu/isolated and if the tuned version includes support for the initial auto setup of isolated_cores (version 2.19.0 or later). It also checks if the realtime tuned profile is active. If any check fails, the test gives a warning before continuing to execute. Preparing for the test Install and boot the realtime ( kernel-rt ) kernel before adding the system to the certification. The command will detect that the running kernel is realtime and will schedule the realtime test to run. For RHEL 8 and RHEL 9, with tuned version 2.19.0 or later, select the realtime tuned profile and reboot the system. Note If you need realtime tuning assistance, you must provide Red Hat with access to your system to allow the required changes to be made, including BIOS modifications. Note Newly installed kernels inherit the kernel command-line parameters from your previously configured kernels. For more information, see Changing kernel command-line parameters for all boot entries . Executing the test The realtime test is non-interactive. Run the following command and then select the appropriate realtime test name from the list that displays. The test will only appear when the system is running the rt-kernel. Run time The system management mode test runs for two hours, and the timer event analysis runs for twelve hours on all machines. The required supportable test will add about a minute to the overall run time. A.37. reboot What the test covers The reboot test confirms the ability of a system to reboot when prompted. It is not required for certification at this time. What the test does The test issues a shutdown -r 0 command to reboot the system immediately, with no delay. Preparing for the test Ensure that the system can be rebooted before running this test by closing any running applications. Executing the test The reboot test is interactive. Run the following command and then select the appropriate reboot test name from the list that displays. You will be asked Ready to restart? when you reach the reboot portion of the test program. Answer y if you are ready to perform the test.
The system will reboot, and after it comes back up, the test server will verify that the reboot completed successfully. A.38. RoCE connection What the test does The RoCE Connection test runs the following subtests to ensure a baseline functionality using, when appropriate, the IP address selected from the dropdown at the onset of the test: Ping test - Runs ping from the starting IP address of the device being tested on the HUT to the selected IP address of the test server. Rping test - Runs rping on test server and HUT using the selected test server IP address, then compares results to verify it ran to completion. Rcopy test - Runs rcopy on test server and HUT, sending a randomly generated file and comparing md5sums on test server and HUT to verify successful transfer. Ethtool test - Runs the ethtool command passing in the detected net device of the RoCE device. ib_write_bw test - Runs ib_write_bw from the HUT to the selected IP address of the test server to test the RoCE write bandwidth and verify that it can reach the required bandwidth. ib_read_bw test - Runs ib_read_bw from the HUT to the selected IP address of the test server to test the RoCE read bandwidth and verify that it can reach the required bandwidth. ib_send_bw test - Runs ib_send_bw from the HUT to the selected IP address of the test server to test the RoCE send bandwidth and verify that it can reach the required bandwidth. Preparing for the test Ensure that the test server and HUT are separate machines, on the same fabric(s). Executing the test This is an interactive test. Run the following command and then select the appropriate RoCE connection test name from the list that displays. You will be prompted with a dropdown to select an IP address (an IP address of the test server) on which to perform the tests. Select an IP address corresponding to a device on the same fabric as the HUT device you are running the test for. Table A.4. Manually adding and running the test Speed Type Command to manually add RoCEConnection Test Command to manually run RoCEConnection Test 10GigRoCE 20GigRoCE 25GigRoCE 40GigRoCE 50GigRoCE 100GigRoCE 200GigRoCE 400GigRoCE Replace <device name> , <port number> , <net device> , and <test server IP addr> with the appropriate values. Additional resources For more information about InfiniBand and RDMA, see Understanding InfiniBand and RDMA technologies . A.39. SATA What the SATA test covers There are many different kinds of persistent on-line storage devices available in systems today. What the test does The SATA test is designed to test anything that reports an ID_TYPE of "disk" in the udev database. This test is for SATA drives . The hwcert/storage/SATA test gets planned if: the controller name of any disk mentions SATA , or the lsscsi transport for the host that the disks are connected to mentions SATA. If neither of the above criteria is met, then the storage test is planned for the detected device instead. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.40. SATA_SSD What the SATA_SSD test covers This test will run if it determines the storage unit of interest is SSD and its interface is SATA. What the SATA_SSD test does The test finds the SCSI storage type and identifies the connected storage interface by reading /sys/block/<device>/queue/rotational (for example, more /sys/block/sdap/queue/rotational ). The test is planned if the rotational bit is set to zero, indicating an SSD. A manual check of these sysfs attributes is sketched below.
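The rotational flag and the device parameters that the SSD storage tests report can be inspected directly in sysfs. The sketch below is illustrative only, and sda is a placeholder device name:
# 0 indicates a non-rotational (SSD) device; 1 indicates a rotational disk
$ cat /sys/block/sda/queue/rotational
# The block-size and I/O-hint values reported by the test
$ cat /sys/block/sda/queue/logical_block_size
$ cat /sys/block/sda/queue/physical_block_size
$ cat /sys/block/sda/queue/minimum_io_size
$ cat /sys/block/sda/queue/optimal_io_size
$ cat /sys/block/sda/alignment_offset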
Following are the device parameter values that would be printed as part of the test: logical_block_size - Used to address a location on the device physical_block_size - Smallest unit on which the device can operate minimum_io_size - Minimum unit preferred for the device's random input/output optimal_io_size - The device's preferred unit for streaming input/output alignment_offset - The offset value from the underlying physical alignment Additional resources For more information about what the test does and preparing for the test see STORAGE . A.41. M2_SATA What the M2_SATA test covers This test will run if it determines the interface is SATA and attached through an M2 connection. Manually adding and running the test Use the following command to manually add the M2_SATA test: Following are the device parameter values that would be printed as part of the test: logical_block_size - Used to address a location on the device physical_block_size - Smallest unit on which the device can operate minimum_io_size - Minimum unit preferred for the device's random input/output optimal_io_size - The device's preferred unit for streaming input/output alignment_offset - The offset value from the underlying physical alignment Additional resources For more information about what the test does and preparing for the test see STORAGE . A.42. U2_SATA What the U2_SATA test covers This test will run if it determines the interface is SATA and attached through a U2 connection. Manually adding and running the test Use the following command to manually add the U2_SATA test: Following are the device parameter values that would be printed as part of the test: logical_block_size - Used to address a location on the device physical_block_size - Smallest unit on which the device can operate minimum_io_size - Minimum unit preferred for the device's random input/output optimal_io_size - The device's preferred unit for streaming input/output alignment_offset - The offset value from the underlying physical alignment Additional resources For more information about what the test does and preparing for the test see STORAGE . A.43. SAS What the SAS test covers There are many different kinds of persistent on-line storage devices available in systems today. What the test does The SAS test is designed to test anything that reports an ID_TYPE of "disk" in the udev database. This test is for SAS drives . The hwcert/storage/SAS test gets planned if: the controller name of any disk mentions SAS , or the lsscsi transport for the host that the disks are connected to mentions SAS. If neither of the above criteria is met, then the storage test is planned for the detected device instead. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.44. SAS_SSD What the SAS_SSD test covers This test will run if it determines the storage unit of interest is SSD and its interface is SAS. What the SAS_SSD test does The test finds the SCSI storage type and identifies the connected storage interface by reading /sys/block/<device>/queue/rotational (for example, more /sys/block/sdap/queue/rotational ). The test is planned if the rotational bit is set to zero, indicating an SSD. A manual transport check with lsscsi is sketched below.
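The controller-name and transport checks that decide whether the SAS tests are planned can be reproduced by hand; this is illustrative only and not part of the test suite:
# List SCSI devices together with their transport; SAS-attached disks report a sas: transport
$ lsscsi -t
# Show the storage controller names for comparison with the planning criteria
$ lspci | grep -i -E 'sas|sata|raid'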
Following are the device parameter values that are printed as part of the test: logical_block_size - Used to address a location on the device physical_block_size - Smallest unit on which the device can operate minimum_io_size - Minimum unit preferred for the device's random input/output optimal_io_size - The device's preferred unit for streaming input/output alignment_offset - The offset value from the underlying physical alignment Additional resources For more information about what the test does and preparing for the test see STORAGE . A.45. PCIE_NVMe What the PCIe_NVMe test covers This test runs if the interface is NVMe and the device is connected through a PCIe connection. RHEL version supported RHEL 8 RHEL 9 What the PCIe_NVMe test does This test gets planned if the logical device host name string contains " nvme[0-9] ". Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.46. M2_NVMe What the M2_NVMe test covers This test runs if the interface is NVMe and the device is connected through an M2 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the M2_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.47. U2_NVMe What the U2_NVMe test covers This test runs if the interface is NVMe and the device is connected through a U2 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the U2_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.48. U3_NVMe What the U3_NVMe test covers This test runs if the interface is NVMe and the device is connected through a U3 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the U3_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device.
physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.49. E3_NVMe What the E3_NVMe test covers This test runs if the interface is NVMe and the device is connected through an E3 connection. RHEL version supported RHEL 8 RHEL 9 Manually adding and running the test To manually add and run the E3_NVMe test, use the following command: Following are the device parameter values that are printed as a part of the test: logical_block_size - Used to address a location on the device. physical_block_size - Smallest unit on which the device can operate. minimum_io_size - Minimum unit preferred for random input or output of the device. optimal_io_size - Preferred unit of the device for streaming input or output operations. alignment_offset - Offset value from the underlying physical alignment. Additional resources For more information about what the test does and preparing for the test see STORAGE . A.50. NVDIMM What the NVDIMM test covers This test operates like any other SSD non-rotational storage test and identifies the NVDIMM storage devices. What the test does The test gets planned for a storage device if: Namespaces (non-volatile memory devices) exist for that disk device, as reported by "ndctl list" . The reported "DEVTYPE" of the device (for example, sda ) is equal to 'disk' . Following are the device parameter values that would be printed as part of the test: logical_block_size - Used to address a location on the device physical_block_size - Smallest unit on which the device can operate minimum_io_size - Minimum unit preferred for the device's random input/output optimal_io_size - The device's preferred unit for streaming input/output alignment_offset - The offset value from the underlying physical alignment Additional resources For more information about what the test does and preparing for the test see STORAGE . A.51. SR-IOV What the test does The SR-IOV test certifies the NIC cards installed on the host under test (HUT) and the test server by checking if the SR-IOV functionality is supported on the cards. The test is based on the Single Root I/O Virtualization (SR-IOV) technology that enables a single physical hardware device to be shared among multiple virtual machines or containers, improving the network performance and efficiency of I/O operations. RHEL version supported RHEL 9 What the test covers The test checks if the NIC card installed on the x86_64 system supports SR-IOV technology. Preparing for the test Before running the test: Install the NIC card on both the HUT and test server. Ensure that each NIC card undergoing testing has two ports. Ensure that the test server and HUT are connected through two direct Ethernet cables back to back for the test to pass successfully. Ensure that the Intel VT-d or AMD IOMMU and SR-IOV Global Enable parameters are enabled in your system's BIOS. For more details, refer to the system's BIOS configuration manual or any other methods provided by the manufacturer. Ensure that SRIOV is enabled for the NIC card under Device Settings. For more details, refer to the NIC card configuration manual or any other methods provided by the manufacturer.
Install the following RPMs on both the HUT and the test server, in the order mentioned. Enable the epel repository and then install beakerlib: SR-IOV Configure Hugepage Reboot the system after the configuration completes. Provision the HUT and test server. See Configuring the systems and running tests by using Cockpit or Configuring the systems and running tests by using CLI . Executing the test On HUT: Edit the config file generated at /etc/redhat-certification/sriov/nic_cert.conf according to your system's configuration. Note You must update the config file, and keep a backup of it, every time after running the provision command. Run the test While the test is executed on the HUT, a corresponding test run is also executed in auto mode on the test server. The test is non-interactive and runs in the background. Run time The test takes around 2 hours to run. Any other mandatory or selected tests will add to the overall run time. A.52. STORAGE What the storage test covers There are many different kinds of persistent on-line storage devices available in systems today. The STORAGE test is designed to test anything that reports an ID_TYPE of "disk" in the udev database. This includes IDE, SCSI, SATA, SAS, and SSD drives, PCIe SSD block storage devices, as well as SD media, xD media, MemoryStick and MMC cards. The test plan script reads through the udev database and looks for storage devices that meet the above criteria. When it finds one, it records the device and its parent and compares it to the parents of any other recorded devices. It does this to ensure that only devices with unique parents are tested. If the parent has not been seen before, the device is added to the test plan. This speeds up testing as only one device per controller will be tested, as per the Policy Guide. What the test does The STORAGE test performs the following actions on all storage devices with a unique parent: The script looks through the partition table to locate a swap partition that is not on an LVM or software RAID device. If found, it will deactivate it with swapoff and use that space for the test. If no swap is present, the system can still test the drive if it is completely blank (no partitions). Note that the swap device must be active in order for this to work (the test reads /proc/swaps to find the swap partitions) and that the swap partition must not be inside any kind of software-based container (no LVM or software RAID, but hardware RAID would work as it would be invisible to the system). The tool creates a filesystem on the device, either in a swap partition or on the blank drive. The filesystem is mounted and the fio or dt command is used to test the device. The fio and dt commands are generic I/O test programs capable of testing reads from and writes to devices. Multiple sets of test patterns verify the functionality of storage devices. After the mounted filesystem test, the filesystem is unmounted and a dt test is performed against the block device, ignoring the file system. The dt test uses the "direct" parameter to handle this. An illustrative manual approximation of this flow is sketched below. Preparing for the test You should install all the drives and storage controllers that are listed on the official test plan. In the case of multiple storage options, as many as can fit into the system at one time can be tested in a single run, or each storage device can be installed individually and have its own run of the storage test. You can decide on the order of testing and number of controllers present for each test.
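The swap-reuse and filesystem I/O flow described in What the test does can be approximated manually. The device name, mount point, and fio parameters below are placeholders, and these are not the exact steps or job options the test suite uses:
# Free the swap partition and put a filesystem on it (this overwrites the swap signature)
$ swapoff /dev/sda2
$ mkfs.xfs -f /dev/sda2
$ mount /dev/sda2 /mnt/certtest
# Run a short mixed random read/write exercise against the mounted filesystem
$ fio --name=certsample --filename=/mnt/certtest/fio.dat --size=1G --rw=randrw --bs=4k --direct=1 --runtime=60 --time_based
# Clean up and restore the swap partition
$ umount /mnt/certtest
$ mkswap /dev/sda2
$ swapon /dev/sda2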
Each logical drive attached to the system must contain a swap partition in addition to any other partitions, or be totally blank. This is to provide the test with a location to create a filesystem and run the tests. The use of swap partitions will lead to a much quicker test, as devices left blank are tested in their entirety. They will almost always be significantly larger than a swap partition placed on the drive. Note If testing an SD media card, use the fastest card you can obtain. While a Class 4 SD card may take 8 hours or more to run the test, a Class 10 or UHS 1/2 card can complete the test run in 30 minutes or less. When it comes to choosing storage devices for the official test plan, the rule that the review team operates by is "one test per code path". What we mean by that is that we want to see a storage test run using every driver that a controller can use. The scenario of multiple drivers for the same controller usually involves RAID storage of some type. It's common for storage controllers to use one driver when in regular disk mode and another when in RAID mode. Some even use multiple drivers depending on the RAID mode that they are in. The review team will analyze all storage hardware to determine the drivers that need to be used in order to fulfill all the testing requirements. That's why you may see the same storage device listed more than once in the official test plan. Complete information on storage device testing is available in the Policy Guide. Executing the test The storage test is non-interactive. Run the following command and then select the appropriate STORAGE test name from the list that displays. Run time, bare-metal The storage test takes approximately 22 minutes on a 6Gb/s SATA hard drive installed in a 2013-era workstation system. The same test takes approximately 3 minutes on a 6Gb/s SATA solid-state drive installed in a 2013-era workstation system. The required supportable test will add about a minute to the overall run time. Additional resources For more information about appropriate swap file sizing, see What is the recommended swap size for Red Hat platforms? . A.53. Special keys What the test covers The Special keys test captures a variety of input events from the system integrated keyboard. This test runs on systems with batteries only. RHEL version supported RHEL 8.6 and later RHEL 9 What the test does The test captures the following: Non-ACPI-related signals such as volume up and down, volume mute, display backlight brightness up and down, and more. Key presses that send signals associated with global keyboard shortcuts, such as <Meta+E> , which opens the file browser. Executing the test The test is interactive. Run the following command and then select the appropriate Special keys test name from the list that displays. This test requires capturing all input events. During the test, press all the non-standard and multimedia keys on the device. Press the Escape key at any time to end the test and see a list of keys. The test is successful if all the keys that you tested appear in the list. Run time The test takes less than 5 minutes to finish. Any other mandatory or selected tests will add to the overall run time. A.54. supportable What the test covers The supportable test gathers basic information about the host under test (HUT). Red Hat uses this information to verify that the system complies with the certification requisites. 
What the test does The test has several subtests that perform the following tasks: Confirm that the /proc/sys/kernel/tainted file contains a zero ( 0 ), which indicates that the kernel is not tainted. Confirm that package verification with the rpm -V command shows that no files have been modified. Confirm that the rpm -qa kernel command shows that the buildhost of the kernel package is a Red Hat server. Record the boot parameters from the /proc/cmdline file. Confirm that the rpm -V redhat-certification command shows that no modifications have been made to any of the certification test suite files. Confirm that all the modules shown by the lsmod command show up in a listing of the kernel files with the rpm -ql kernel command. Confirm that all modules are on the Kernel Application Binary Interface (kABI) stablelist . Confirm that the module vendor and buildhost are appropriate Red Hat entries. Confirm that the kernel is the GA kernel of the Red Hat minor release. The subtest tries to verify the kernel with data from the redhat-certification package. If the kernel is not present, the subtest attempts to verify the kernel by using the Internet connection. To verify the kernel by using the Internet connection, you must either configure the HUT's routing and DNS resolution to access the Internet or set the ftp_proxy=http://proxy.domain:80 environment variable. Check for any known hardware vulnerabilities reported by the kernel. The subtest reads the files in the /sys/devices/system/cpu/vulnerabilities/ directory and exits with a warning if the files contain the word "Vulnerable". Confirm whether the system has any offline CPUs by checking the output of the lscpu command. Confirm whether Simultaneous Multithreading (SMT) is available, enabled, and active in the system. Check for unmaintained hardware or drivers in systems running RHEL 8 or later. Unmaintained hardware and drivers are no longer tested or updated on a routine basis. Red Hat may fix serious issues, including security issues, but you cannot expect updates on any planned cadence. Replace or remove unmaintained hardware or drivers as soon as possible. Check for deprecated hardware or drivers in systems running RHEL 8 or later. Deprecated hardware and drivers are still tested and maintained, but they are planned to become unmaintained and eventually disabled in a future release. Replace or remove deprecated devices or hardware as soon as possible. Check for disabled hardware in systems running RHEL 8 or later. RHEL cannot use disabled hardware. Replace or remove the disabled hardware from your system before running the test again. Run the following checks on the software RPM packages: Check the RPM build host information to isolate non-Red Hat packages. The test will ask you to explain the reasons for including the non-Red Hat packages. Red Hat will review the reasons and approve or reject each package individually. Check that the installed RPM packages are from the Red Hat products available in the offering and have not been modified. Red Hat reviews verification failures in the rpm_verification_report.log file. You will need to reinstall the failed packages and rerun the test. Check for the presence of both Red Hat and non-Red Hat firmware files in the system. The subtest lists the non-Red Hat files, if present, and exits with a REVIEW status. Check the page size of the system by running the getconf PAGESIZE command. After performing these tasks, the test gathers a sosreport and the output of the dmidecode command.
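Several of these checks can be spot-checked by hand before a certification run; the commands below are illustrative only and are not a substitute for the test itself:
# A value of 0 means the kernel is not tainted
$ cat /proc/sys/kernel/tainted
# No output means the certification test suite files have not been modified
$ rpm -V redhat-certification
# Review the hardware vulnerability mitigations reported by the kernel
$ grep . /sys/devices/system/cpu/vulnerabilities/*
# Check for offline CPUs and report the system page size
$ lscpu | grep -i 'off-line'
$ getconf PAGESIZE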
Executing the test The rhcert tool runs the supportable test automatically as part of every run of the test suite. The supportable test runs before any other test. The output of the supportable test is required as part of the test suite logs. Red Hat will reject test logs that do not contain the output of the supportable test. Use the following command to run the test manually, if required: $ rhcert-cli run --test supportable Run time The supportable test takes around 1 minute on a 2013-era, single CPU, 3.3GHz, 6-core or 12-thread Intel workstation with 8 GB of RAM running Red Hat Enterprise Linux 6.4 for AMD64 and Intel 64 that was installed using the Kickstart files in this guide. The time will vary depending on the speed of the machine and the number of RPM files that are installed. A.55. suspend What the test covers (Laptops only) The suspend test covers suspend/resume from S3 sleep state (suspend to RAM) and suspend/resume from S4 hibernation (suspend to disk). The test also covers the freeze (suspend to idle - s2idle) state that allows more energy to be saved. This test is only scheduled on systems that have built-in batteries, such as laptops. Important The suspend to RAM and suspend to disk abilities are essential characteristics of laptops. We therefore schedule an automated suspend test at the beginning of all certification test runs on a laptop. This ensures that all hardware functions normally post-resume. The test will always run on a laptop, much like the supportable test, regardless of what tests are scheduled. What the test does The test queries the /sys/power/state file and determines which states are supported by the hardware. If it sees "mem" in the file, it schedules the S3 sleep test. If it sees "disk" in the file, it schedules the S4 hibernation test. If it sees both, it schedules both. What follows is the procedure for a system that supports both S3 and S4 states. If your system does not support both types, it will only run the tests related to the supported type. Suspend states on RHEL 8 and RHEL 9 are written in the /sys/power/state file. If S3 sleep is supported, the script uses the pm-suspend command to suspend to RAM. The tester wakes the system up after it sleeps and the scripts check the exit code of pm-suspend to verify that the system woke up correctly. Testing then continues on the test server interface. If S4 hibernation is supported, the script uses the pm-suspend command to suspend to disk. The tester wakes the system up after it hibernates and the scripts check the exit code of pm-suspend to verify that the system woke up correctly. Testing then continues on the test server interface. If S3 sleep is supported, the tester is prompted to press the key that manually invokes it (an Fn + F-key combination or a dedicated Sleep key) if such a key is present. The tester wakes the system up after it sleeps and the scripts check the exit code of pm-suspend to verify that the system woke up correctly. Testing then continues on the test server interface. If S4 hibernation is supported, the tester is prompted to press the key that manually invokes it (an Fn + F-key combination or a dedicated Hibernate key) if such a key is present. The tester wakes the system up after it hibernates and the scripts check the exit code of pm-suspend to verify that the system woke up correctly. Testing then continues on the test server interface.
If the system has no suspend key, this section can be skipped. Preparing for the test Ensure that a swap file large enough to hold the contents of RAM was created when the system was installed. Someone must be present at the Host Under Test in order to wake it up from suspend and hibernate. Executing the test The suspend test is interactive. Run the following command and then select the appropriate suspend test name from the list that displays. The test will prompt suspend? Answer Yes to suspend the laptop. The test server will display waiting for response after it sends the suspend command. Check the laptop and confirm that it has completed suspending, then press the power button or any other key that will wake it from suspend. The test server will continuously monitor the host under test to see if it has awakened. Once it has woken up, the test server GUI will display the question Has resume completed? . Press the Yes or No button to tell the test server what happened. The server will then continue to the hibernate test. Again, click the Yes button under the suspend? question to put the laptop into hibernate mode. The test server will display waiting for response after it sends the hibernate command. Check the laptop and confirm that it has completed hibernating, then press the power button or any other key that will wake it from hibernation. The test server will continuously monitor the Host Under Test to see if it has awakened. Once it has woken up, the test server GUI will display the question Has resume completed? . Press the Yes or No button to tell the test server what happened. Next, the test server will ask you if the system has a keyboard key that will cause the Host Under Test to suspend. If it does, click the Yes button under the question Does this system have a function key (Fn) to suspend the system to mem? . Follow the procedure described above to verify suspend and wake the system up to continue with testing. Finally, the test server will ask you if the system has a keyboard key that will cause the Host Under Test to hibernate. If it does, click the Yes button under the question Does this system have a function key (Fn) to suspend the system to disk? Follow the procedure described above to verify hibernation and wake the system up to continue with any additional tests you have scheduled. Run time The suspend test takes about 6 minutes on a 2012-era laptop with 4GB of RAM and a non-SSD hard drive. This is the time for a full series of tests, including both pm-suspend-based and function-key-based suspend and hibernate runs. The time will vary depending on the speed at which the laptop can write to disk, the amount and speed of the RAM installed, and the capability of the laptop to enter suspend and hibernate states through function keys. The required supportable test will add about a minute to the overall run time. Additional resources For more information about appropriate swap file sizing, see What is the recommended swap size for Red Hat platforms? . A.56. tape What the test covers The tape test covers all types of tape drives. Any robots associated with the drives are not tested by this test. What the test does The test uses the mt command to rewind the tape, then it does a tar of the /usr directory and stores it on the tape. A tar compare is used to determine if the data on the tape matches the data on the disk. If the data matches, the test passes. Preparing for the test Insert a tape of the appropriate size into the drive. Executing the test The tape test is non-interactive.
Run the following command and then select the appropriate tape test name from the list that displays. A.57. Thunderbolt3 What the test covers The Thunderbolt3 test covers Thunderbolt 3 ports from a hot plug and basic functionality standpoint, ensuring that all ports can be accessed by the OS and devices attached to the ports are properly added and removed. RHEL version supported RHEL 8 RHEL 9 What the test does The purpose of the test is to ensure that all Thunderbolt 3 ports present in a system function as expected. It asks for the number of available Thunderbolt3 ports and then asks the tester to plug and unplug a Thunderbolt 3 device into each port. The test watches for Thunderbolt 3 device attach and detach events and records them. If it detects both plug and unplug events for the number of unique ports the tester entered, the test will pass. Note, while Thunderbolt 3 devices use the same physical connector as USB C devices, USB C devices are not Thunderbolt 3 devices. The test will not pass if USB C devices are used including USB C devices that claim compatibility with Thunderbolt 3 ports. Only Thunderbolt 3 devices can be used for this test. Preparing for the test Count the available Thunderbolt3 ports and have an available Thunderbolt3 device to use during the test. Executing the test The Thunderbolt3 test is interactive. Run the following command and then select the appropriate Thunderbolt3 test name from the list that displays. When prompted by the system, enter the number of available Thunderbolt3 ports present on the system. The system will ask for a Thunderbolt3 device to be plugged into a port and will then pause until the tester presses y to continue. The system will then ask for the device to be unplugged and again will pause until the tester presses y to continue. These steps repeat for the number of ports that were entered. Note that there is no right or wrong order for testing the ports, but each port must be tested only once. Run time The Thunderbolt3 test takes about 15 seconds per Thunderbolt3 port. This includes the time to manually plug in the device, scan the port, unplug the device, and scan the port again. Any other mandatory or selected tests will add to the overall run time. A.58. Thunderbolt4 What the test covers The Thunderbolt4 test covers Thunderbolt 4 ports from a hot plug and basic functionality standpoint, ensuring that all ports can be accessed by the OS and devices attached to the ports are properly added and removed. RHEL version supported RHEL 8 RHEL 9 What the test does The purpose of the test is to ensure that all Thunderbolt 4 ports present in a system function as expected. It asks for the number of available Thunderbolt4 ports and then asks the tester to plug and unplug a Thunderbolt 4 device into each port. The test watches for Thunderbolt 4 device attach and detach events and records them. If it detects both plug and unplug events for the number of unique ports the tester entered, the test will pass. Note, while Thunderbolt 4 devices use the same physical connector as USB C devices, USB C devices are not Thunderbolt 4 devices. The test will not pass if USB C devices are used including USB C devices that claim compatibility with Thunderbolt 4 ports. Only Thunderbolt 4 devices can be used for this test. This test also validates that the generation of connection between the host and the connected device is Thunderbolt 4. Preparing for the test Count the available Thunderbolt4 ports and have an available Thunderbolt4 device to use during the test. 
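The attach and detach events that the Thunderbolt tests watch for can also be observed manually with udevadm while plugging and unplugging the device; this is illustrative only and not part of the test suite:
# Print kernel uevents for the thunderbolt subsystem while the device is connected and removed
$ udevadm monitor --kernel --subsystem-match=thunderbolt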
Executing the test The Thunderbolt4 test is interactive. Run the following command and then select the appropriate Thunderbolt4 test name from the list that displays. When prompted by the system, enter the number of available Thunderbolt4 ports present on the system. The system will ask for a Thunderbolt4 device to be plugged into a port and will then pause until the tester presses y to continue. The system will then ask for the device to be unplugged and again will pause until the tester presses y to continue. These steps repeat for the number of ports that were entered. Note that there is no right or wrong order for testing the ports, but each port must be tested only once. Run time The Thunderbolt4 test takes about 15 seconds per Thunderbolt4 port. This includes the time to manually plug in the device, scan the port, unplug the device, and scan the port again. Any other mandatory or selected tests will add to the overall run time. A.59. usb_storage What the test covers The usb_storage test adds speed detection functionality to the existing storage test. The usb_storage test comprises: USB2_storage test to detect the version of the connected USB device USB3_storage test to detect the version and interface speed of the connected USB device and supports multiple speeds (5Gbps, 10Gbps, 20Gbps, 40Gbps) RHEL version supported RHEL 8 RHEL 9 What the test does The test detects the interface speed and version of the USB device connected to the system and accordingly plans the corresponding test. For example, if a USB 3.0 device with a supported interface speed of 10Gbps is detected, the USB3_10Gbps_Storage subtest will be planned and executed. Preparing for the test Ensure that the USB storage device is connected to the system. Executing the test You can choose either way to run the test: Run the following command and then select the appropriate USB test name from the list that displays. Run the rhcert-cli command by specifying the desired test name. For example, Additional resources For more information on the rest of the test functionality, see STORAGE . A.60. USB2 What the test covers The USB2 test covers USB2 ports from a basic functionality standpoint, ensuring that all ports can be accessed by the OS. What the test does The purpose of the test is to ensure that all USB2 ports present in a system function as expected. It asks for the number of available USB2 ports (minus any that are in use for keyboard/mouse, etc.) and then asks the tester to plug and unplug a USB2 device into each port. The test watches for attach and detach events and records them. If it detects both plug and unplug events for the number of unique ports the tester entered, the test will pass. Preparing for the test Count the available USB2 ports and have a spare USB2 device available to use during the test. You may need to trace the USB ports from the motherboard header(s) to distinguish between USB2 and USB3 ports. Executing the test The USB2 test is interactive. Run the following command and then select the appropriate USB2 test name from the list that displays. When prompted by the system, enter the number of available USB2 ports present on the system. Don't count any that are currently in use by keyboards or mice. The system will ask for the test USB2 device to be plugged into a port and will then pause until the tester presses y to continue. The system will then ask for the device to be unplugged and again will pause until the tester presses y to continue. 
These steps repeat for the number of ports that were entered. Note that there is no right or wrong order for testing the ports, but each port must be tested only once. Run time The USB2 test takes about 15 seconds per USB2 port. This includes the time to manually plug in the device, scan the port, unplug the device, and scan the port again. The required supportable test will add about a minute to the overall run time. A.61. USB3 What the test covers The USB3 test covers USB3 ports from a basic functionality standpoint, ensuring that all ports can be enumerated, accessed, and hot plugged by the OS. The USB3 test supports three different speed-based tests, for each 5Gbps, 10Gbps, and 20Gbps. All three tests are planned if the system supports USB3. Successful credit for each test will result in the corresponding feature included in the Red Hat Ecosystem Catalog for the certification. The tests and their success criteria are as follows: Success criteria: USB3_5Gbps - The test will pass when the device transfer speed is 5Gbps. USB3_10Gbps - The test will pass when the device transfer speed is 10Gbps. USB3_20Gbps - The test will pass when the device transfer speed is 20Gbps. What the test does The purpose of the test is to ensure that all USB3 ports present in a system function as expected. It asks for the number of available USB3 ports (minus any that are in use for keyboard/mouse, etc.) and then asks the tester to plug and unplug a USB3 device into each port. The test watches for attach and detach events and records them. If it detects both plug and unplug events for the number of unique ports the tester entered, the test will pass. Preparing for the test Count the available USB3 ports and have an available USB3 device to use during the testing. You may need to trace the USB ports from the motherboard header(s) to distinguish between USB2 and USB3 ports. Ensure that the line speed of the device matches the expected speed of the test, that is, 5Gbps, 10 Gbps, or 20Gbps. Executing the test The USB3 test is interactive. Run the following command and then select the appropriate USB3 test name from the list that displays. When prompted by the system, enter the number of available USB3 ports present on the system. Don't count any that are currently in use by keyboards or mice. The system will ask for the test USB3 device to be plugged into a port and will then pause until the tester presses y to continue. The system will then ask for the device to be unplugged and again will pause until the tester presses y to continue. These steps repeat for the number of ports that were entered. Note that there is no right or wrong order for testing the ports, but each port must be tested only once. Run time The USB3 test takes about 15 seconds per USB3 port. This includes the time to manually plug in the device, scan the port, unplug the device, and scan the port again. The required supportable test will add about a minute to the overall run time. A.62. USB4 What the test covers The USB4 test covers USB4 ports from a basic functionality standpoint, ensuring that all ports can be enumerated, accessed, and hot plugged by the OS. The USB4 test supports two different speed-based tests, one for 20Gbps and one for 40Gbps. Both tests are planned if the system supports USB4. Successful credit for each test will result in the corresponding feature included in the Red Hat Ecosystem Catalog for the certification. 
The tests and their success criteria are as follows: Success criteria USB4_20Gbps - The test will pass when the device transfer speed is 20Gbps. USB4_40Gbps - The test will pass when the device transfer speed is 40Gbps. What the test does The purpose of the test is to ensure that all USB4 ports present in a system function as expected. It asks for the number of available USB4 ports (minus any that are in use for keyboard/mouse, etc.) and then asks the tester to plug and unplug a USB4 device into each port. The test watches for attach and detach events and records them. If it detects both plug and unplug events for the number of unique ports the tester entered, the test will pass. Preparing for the test: Count the available USB4 ports and have an available USB4 device to use during the testing. You may need to trace the USB ports from the motherboard header(s) to distinguish between USB2, USB3, and USB4 ports. Ensure that the line speed of the device matches the expected speed of the test, that is, 20Gbps or 40Gbps. Executing the test The USB4 test is interactive. Run the following command and then select the appropriate USB4 test name from the list that displays. When prompted by the system, enter the number of available USB4 ports present on the system. Don't count any that are currently in use by keyboards or mice. The system will ask for the test USB4 device to be plugged into a port and will then pause until the tester presses y to continue. The system will then ask for the device to be unplugged and again will pause until the tester presses y to continue. These steps repeat for the number of ports that were entered. Note that there is no right or wrong order for testing the ports, but each port must be tested only once. Run time The USB4 test takes about 15 seconds per USB4 port. This includes the time to manually plug in the device, scan the port, unplug the device, and scan the port again. The required supportable test will add about a minute to the overall run time. A.63. VIDEO What the test covers For RHEL 8, the VIDEO test checks for all removable or integrated video hardware on the motherboard. Devices are selected for testing by their PCI class ID. Specifically, the test checks for a device with a PCI class as Display Controller in the udev command output. For RHEL 9, the VIDEO test remains the same. However, for framebuffer graphic solutions, the test is planned after it identifies if the display kernel driver is in use as a framebuffer and if direct rendering is not supported using the glxinfo command. What the test does The test runs multiple subtests: Check Connections - Logs the xrandr command output. This subtest is optional, and its failure does not affect the overall test result. Set Configuration - Checks the necessary configuration prerequisites like setting the display depth, flags, and configurations for the subtest. The X Server Test - Starts another display server using the new configuration file and runs the glxgears , a lightweight MESA OpenGL demonstration program to check the performance. Log Module and Drivers - Runs xdpyinfo to determine the screen resolution and color depth. Along with that, the configuration file created at the start of the test should allow the system to run at the maximum resolution capability. Finally, the test uses grep to search through the /var/log/Xorg.0.log logfile to determine in-use modules and drivers. 
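The resolution and color depth that these subtests work with can be confirmed manually before the run; this is illustrative only:
# List the modes supported by the connected monitor and the current mode
$ xrandr
# Report the screen dimensions and color depth of the running display server
$ xdpyinfo | grep -E 'dimensions|depth of root'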
Preparing for the test Ensure that the monitor and video card in the system can run at a resolution of 1024x768 with a color depth of 24 bits per pixel (bpp). Higher resolutions or color depths are also acceptable. Check the xrandr command output for 1024x768 at 24 bpp or higher to confirm. If you do not see all the resolutions that the card or monitor combination can generate, remove any KVM switches between the monitor and video card. Executing the test The test is non-interactive. Run the following command and then select the appropriate VIDEO test name from the list that displays. First, the test system screen will go blank, and then a series of test patterns from the x11perf test program will appear. When the test finishes, it will return to the desktop or the virtual terminal screen. Run time The test takes about 1 minute to complete. Any other mandatory or selected tests will add to the overall run time. A.64. VIDEO_PORTS What the test covers The VIDEO_PORTS test checks whether all the graphics output ports of each graphics processor in the system are functioning. The test runs on machines that have one or more graphics output ports. Machines with one or more embedded or add-on graphics processors are also supported, including laptops with ports wired to integral panels. The test does not run on a port if it does not detect a display connected to that port. RHEL version supported RHEL 9 What the test does The test performs the following actions: The test runs through each port that it detects as having a monitor connected. The test then launches a glmark2 window and prompts you to drag the test window to each connected display. If the test detects additional ports that are untested, it goes into interactive mode. It prompts you to attach a display to each untested port and to repeat the test. The test continues to run in this loop until it has tested all detected ports, or until you indicate that the untested ports are not usable by customers. If there are unusable ports, the test prompts you for clarification. When the loop exits, the test displays a PASS result if all ports have been tested or a REVIEW result if some ports were identified as unusable. Preparing for the test Prepare a set of monitors that have the appropriate connectors for your system. This includes a built-in monitor and at least one external monitor. If there are fewer monitors than ports, the test will run in loops and allow you to connect the displays to ports in batches. The built-in monitor must continue working in addition to each of the external monitors attached. There may be more electronic than physical ports, meaning that the hardware supports more displays than the system makes available to the user. The list of ports displayed on the screen when the test begins is not relevant to the test. There may be more physical ports in the system than can be used all at once. There may also be ghost ports such as service ports or USBs. You must be able to differentiate between a port that is not functioning due to incompatibility with another port or because it is a ghost port, and a port that is not functioning at all. Executing the test The VIDEO_PORTS test is interactive. Before executing the test, connect a monitor to at least one of the graphics output ports. Provision the system: Run this command: When prompted, enter the path of the test plan saved on your system. If prompted, provide the hostname or the IP address of the test server to set up a passwordless SSH.
You will only be prompted the first time you add a new system. Start the test: Note The test starts by listing a set of internal displays, both connected and disconnected. These do not represent the physical ports being tested. For each connected graphics output port, follow the steps below: Wait for the test to identify the port. When prompted, press any key to continue. The glmark2 window opens. Move this window to the monitor connected to the port, if different from the active monitor. The glmark2 benchmark measures various aspects of OpenGL (ES) 2.0 performance on the identified display. The benchmark invokes a series of images, which test different combinations of surface, angle, color, and light. Wait for the glmark2 window to close. You will see a glmark2 score and a Test passed message for each successful test. For each unconnected graphics output port, follow the steps below: When prompted, connect a monitor to the graphics port and enter yes to continue. On the first prompt, you can enter no to end the GRAPHICS_PORTS test. For each additional prompt, a timer is displayed that gives you 20 seconds to connect the monitor. The timer is repeated three times before timeout. Wait for the test to identify the port. When prompted, press any key to continue. Move the glmark2 window to the monitor connected to the port. Wait for the window to close. When there are no additional ports to connect, let the timer run for 60 seconds until timeout. The test exits with a PASS result if all ports were tested successfully. Optionally save the test results to a log file: Access the log file from your browser, by navigating to the location of the log files. Run time The test time varies according to the number of ports being tested. Each port takes around 2-3 minutes to test. Additional factors impacting test time include system performance, such as memory frequency and CPU. Any other mandatory or selected tests will add to the overall run time. A.65. VIDEO_DRM What the test covers The VIDEO_DRM test verifies the graphics controller, which utilizes a native DRM kernel driver with basic graphics support. The test will plan if: The display driver in use is identified as a kernel mode-setting driver. The display driver is not a framebuffer. The direct rendering is not supported as identified by the glxinfo command, and the OpenGL renderer string is llvmpipe . RHEL version supported RHEL 9 What the test does The test verifies the functionality of the graphics controller similar to the VIDEO test. Preparing for the test Ensure that the monitor and video card in the system can run at a resolution of 1024x768 with a color depth of 24 bits per pixel (bpp). Higher resolutions or color depths are also acceptable. Check the xrandr command output for 1024x768 at 24 bpp or higher to confirm. If you do not see all the resolutions that the card or monitor combination can generate, ensure to remove any KVM switches between the monitor and video card. Executing the test The test is non-interactive. Run the following command and then select the appropriate VIDEO_DRM test name from the list that displays. First, the test system screen will go blank, and then a series of test patterns from the x11perf test program will appear. When the test finishes, it will return to the desktop or the virtual terminal screen. Run time The test takes about 1 minute to complete. Any other mandatory or selected tests will add to the overall run time. A.66. 
VIDEO_DRM_3D What the test covers The VIDEO_DRM_3D test verifies the graphics controller, which utilizes a native DRM kernel driver with accelerated graphics support. The test will plan if: The display driver in use is identified as a kernel mode-setting driver. The display driver is not a framebuffer. The direct rendering is supported as identified by the glxinfo command, and the OpenGL renderer string is not llvmpipe . The test uses Prime GPU Offloading technology to execute all the video test subtests. RHEL version supported RHEL 9 What the test does The test verifies the functionality of the graphics controller similar to the VIDEO test. In addition, the test runs the following subtests: Vulkaninfo test - Logs the vulkaninfo command output to collect the Vulkan information such as device properties of identified GPUs, Vulkan extensions supported by each GPU, recognized layers, supported image formats, and format properties. Glmark2 benchmarking test - Runs the glmark2 command to generate the score based on the OpenGL 2.0 & ES 2.0 benchmark set of tests and confirms the 3D capabilities. The subtest executes the utility two times with a different set of parameters, first with the Hardware renderer and later with the Software renderer. If the Hardware renderer command-run results in a better score than software, the test passes successfully, confirming the display controller has better 3D capabilities, otherwise fails. Preparing for the test Ensure that the monitor and video card in the system can run at a resolution of 1024x768 with a color depth of 24 bits per pixel (bpp). Higher resolutions or color depths are also acceptable. Check the xrandr command output for 1024x768 at 24 bpp or higher to confirm. If you do not see all the resolutions that the card or monitor combination can generate, ensure to remove any KVM switches between the monitor and video card. Executing the test The test is non-interactive. Run the following command and then select the appropriate VIDEO_DRM_3D test name from the list that displays. First, the test system screen will go blank, and then a series of test patterns from the x11perf test program will appear. When the test finishes, it will return to the desktop or the virtual terminal screen. Run time The test takes about 1 minute to complete. Any other mandatory or selected tests will add to the overall run time. A.67. WirelessG What the test covers The WirelessG test is run on all wireless Ethernet connections with a maximum connection speed of 802.11g. What the test does This is a new test that combines the existing wlan and network tests. In addition to passing all the existing network test items, this test must detect a "g" link type as reported by iw and demonstrate a minimum throughput of 22Mb/s in order to pass. Additional resources For more information on the rest of the test functionality, see network . A.68. WirelessN What the test covers The WirelessN test is run on all wireless Ethernet connections with a maximum connection speed of 802.11n. What the test does This is a new test that combines the existing wlan and network tests. In addition to passing all the existing network test items, this test must detect an "n" link type as reported by iw and demonstrate a minimum throughput of 100Mb/s in order to pass. Additional resources For more information on the rest of the test functionality, see network . A.69. WirelessAC What the test covers The WirelessAC test is run on all wireless Ethernet connections with a maximum connection speed of 802.11ac. 
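The WirelessG, WirelessN, and WirelessAC descriptions key off the link type that iw reports. As a quick, informal pre-check (not part of the test itself), you can inspect the association on the wireless interface; wlp3s0 below is a hypothetical interface name:
# Show the current association, including the tx bitrate and link details
# from which the 802.11 generation in use can be inferred.
iw dev wlp3s0 link
# Show general interface information (type, channel, transmit power).
iw dev wlp3s0 info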
What the test does This is a new test that combines the existing wlan and network tests. In addition to passing all the existing network test items, this test must detect an "ac" link type as reported by iw and demonstrate a minimum throughput of 300Mb/s in order to pass. Additional resources For more information about the rest of the test functionality, see network . A.70. WirelessAX (Superseded by WiFi6) What the test covers The WirelessAX test is run on all wireless Ethernet connections with a maximum connection speed of 802.11ax. What the test does The test detects an "ax" link type reported by iw and matches a product name containing "wifi 6" or "AX" to decide if the device has AX Support. The WirelessAX test is also planned if the device passes the WirelessAC test, and it must demonstrate a minimum throughput of 1200 Mb/s in order to pass. This test is not planned automatically but can be planned manually via the CLI; the WiFi6 test is planned automatically instead. A.71. WiFi6 What the test covers The WiFi6 test is run on all wireless Ethernet connections with a maximum connection speed of 802.11ax. What the test does The test detects an "ax" link type reported by iw and matches a product name containing "wifi 6" or "AX" to decide if the device has AX Support. The WiFi6 test is also planned if the device passes the WirelessAC test, and it must demonstrate a minimum throughput of 1200 Mb/s in order to pass. A.72. WiFi6E What the test covers The WiFi6E test is run on all wireless Ethernet connections with a maximum connection speed of 802.11ax utilizing the 6GHz frequency band. RHEL version supported RHEL 8 RHEL 9 What the test does The test detects an "ax" link type reported by iw and matches a product name containing "wifi 6E" or "AX" to decide if the device has AX Support. The WiFi6E test is also planned if the device passes the WirelessAC test, and it must demonstrate a minimum throughput of 6000 Mb/s in order to pass. A.73. Manually adding and running the tests On rare occasions, tests may fail to plan due to problems with hardware detection or other issues with the hardware, OS, or test scripts. If this happens, you should get in touch with your Red Hat support contact for further assistance. They will likely ask you to open a support ticket for the issue, and then explain how to manually add a test to your local test plan using the rhcert-cli command on the HUT. Any modifications you make to the local test plan will be sent to the test server, so you can continue to use the web interface on the test server to run your tests. The command is run as follows: The options for the rhcert-cli command used here are: plan - Modify the test plan --add - Add an item to the test plan --test=<testname> - The test to be added.
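For example, adding the audio test to the local plan and then running it might look like the following. The test name is taken from the list of test names that follows, and the invocations mirror the rhcert-cli examples shown elsewhere in this guide:
# Illustrative only: add the audio test to the local test plan on the HUT,
# then run it.
rhcert-cli plan --add --test=hwcert/audio
rhcert-cli run --test=audio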
The test names are as follows: hwcert/suspend hwcert/audio hwcert/battery hwcert/lid hwcert/usbbase/expresscard hwcert/usbbase/usbbase/usb2 hwcert/usbbase/usbbase/usb3 hwcert/kdump hwcert/network/Ethernet/100MegEthernet hwcert/network/Ethernet/1GigEthernet hwcert/network/Ethernet/10GigEthernet hwcert/network/Ethernet/40GigEthernet hwcert/network/wlan/WirelessG hwcert/network/wlan/WirelessN hwcert/memory hwcert/core hwcert/cpuscaling hwcert/fvtest/fv_core hwcert/fvtest/fv_live_migration hwcert/fvtest/fv_memory hwcert/fvtest/fv_network hwcert/fvtest/fv_storage hwcert/fvtest/fv_pcie_storage_passthrough hwcert/fvtest/fv_pcie_network_passthrough hwcert/fvtest/fv_usb_storage_passthrough hwcert/fvtest/fv_usb_network_passthrough hwcert/fvtest/fv_cpu_pinning hwcert/profiler hwcert/storage hwcert/video hwcert/supportable hwcert/optical/bluray hwcert/optical/dvd hwcert/optical/cdrom hwcert/fencing hwcert/realtime hwcert/reboot hwcert/tape hwcert/rdma/Infiniband_QDR hwcert/rdma/Infiniband_FDR hwcert/rdma/Infiniband_EDR hwcert/rdma/Infiniband_HDR hwcert/rdma/Infiniband_Socket_Direct hwcert/rdma/10GigRoCE hwcert/rdma/20GigRoCE hwcert/rdma/25GigRoCE hwcert/rdma/40GigRoCE hwcert/rdma/50GigRoCE hwcert/rdma/100GigRoCE hwcert/rdma/200GigRoCE hwcert/rdma/10GigiWarp hwcert/rdma/20GigiWarp hwcert/rdma/25GigiWarp hwcert/rdma/40GigiWarp hwcert/rdma/50GigiWarp hwcert/rdma/100GigiWarp hwcert/rdma/200GigiWarp hwcert/rdma/Omnipath hwcert/network/Ethernet/2_5GigEthernet hwcert/network/Ethernet/5GigEthernet hwcert/network/Ethernet/20GigEthernet hwcert/network/Ethernet/25GigEthernet- hwcert/network/Ethernet/50GigEthernet hwcert/network/Ethernet/100GigEthernet hwcert/network/Ethernet/200GigEthernet rhcert/self-check hwcert/sosreport hwcert/storage/U2 SATA hwcert/storage/M2 SATA hwcert/storage/SATA_SSD hwcert/storage/SATA hwcert/storage/SAS_SSD hwcert/storage/SAS hwcert/storage/U2_NVME hwcert/storage/M2_NVME hwcert/storage/PCIE_NVME hwcert/storage/NVDIMM hwcert/storage/STORAGE The other options are only needed if a device must be specified, like in the network and storage tests that need to be told which device to run on. There are various places you would need to look to determine the device name or UDI that would be used here. Support can help determine the proper name or UDI. Once found, you would use one of the following two options to specify the device: --device=<devicename> - The device that should be tested, identified by a device name such as "enp0s25" or "host0". --udi=<UDI> - The unique device ID of the device to be tested, identified by a UDI string. Run the rhcert-cli command by specifying the test name: for example: You can specify --device to run the specific device: for example: Note It is advisable to use rhcert-cli or rhcert-run independently and save the results. Mixing the use of both rhcert-cli and rhcert-run and saving the results together may result in the inability to process the results correctly. Revised on 2025-03-12 15:13:09 UTC
[ "rhcert-run", "rhcert-cli run --test=<test name>", "rhcert-run", "E: SUBSYSTEM=sound E: SOUND_INITIALIZED=1", "rhcert-run", "rhcert-run", "POWER_SUPPLY_TYPE=Battery", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "/sys/devices/system/cpu/cpu X /cpufreq", "cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies", "rhcert-run", "rhcert-run", "ethtool eth0 Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 2 Transceiver: internal Auto-negotiation: on MDI-X: on Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: yes", "ethtool eth1 Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: Unknown! Duplex: Unknown! (255) Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: Unknown Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: no", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli plan --add -t fv_live_migration", "rhcert-cli run -t fv_live_migration --server=<server name>", "rhcert-cli plan --add -t fv_live_migration", "rhcert-cli run --test fv_live_migration --server=<server name>", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli plan --add --test Infiniband_QDR --device <devicename>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test Infiniband_QDR --server <test server IP addr>", "rhcert-cli plan --add --test Infiniband_FDR --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test Infiniband_FDR --server <test server IP addr>", "rhcert-cli plan --add --test Infiniband_EDR --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test Infiniband_EDR --server <test server IP addr>", "rhcert-cli plan --add --test Infiniband_HDR --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test Infiniband_HDR --server <test server IP addr>", "rhcert-cli plan --add --test Infiniband_NDR --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test Infiniband_NDR --server <test server IP addr>", "rhcert-cli plan --add --test Infiniband_Socket_Direct", "rhcert-cli run --test Infiniband_Socket_Direct --server <test server IP addr>", "rhcert-run", "ls /sys/firmware/efi/", "rhcert-run", "rhcert-cli plan --add --test iPXE", "rhcert-cli run --test iPXE", "rhcert-run", "rhcert-cli plan --add --test 10GigiWarp --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 10Gigiwarp --server <test server IP addr>", "rhcert-cli plan --add --test 20GigiWarp --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 20GigiWarp --server <test server IP addr>", "rhcert-cli plan --add --test 25GigiWarp --device <device 
name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 25GigiWarp --server <test server IP addr>", "rhcert-cli plan --add --test 40GigiWarp --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 40GigiWarp --server <test server IP addr>", "rhcert-cli plan --add --test 50GigiWarp --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 50GigiWarp --server <test server IP addr>", "rhcert-cli plan --add --test 100GigiWarp --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 100GigiWarp --server <test server IP addr>", "rhcert-cli plan --add --test 200GigiWarp --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 200GigiWarp --server <test server IP addr>", "rhcert-run", "rhcert-cli run --test=kdump --server=<test server's IP>", "rhcert-cli run --test=kdump --device=local", "rhcert-cli run --test=kdump --device=nfs --server=<test server's IP>", "rhcertd start", "E: NAME=\"Lid Switch\"", "rhcert-run", "rhcert-run", "sudo dd if=/dev/zero of=/swapfile bs=1M count=8000 chmod 600 /swapfile mkswap /swapfile swapon /swapfile swapon -s edit file /etc/fstab and add the following line: /swapfile swap swap defaults 0 0 write file and quit/exit", "lspci -d ::0502", "daxctl reconfigure-device --mode=system-ram <dax device-id> --no-movable --no-online daxctl offline-memory <dax device-id> daxctl reconfigure-device --mode=system-ram <dax device-id> --no-movable", "rhcert-run", "rhcert-run", "rhcert-cli plan --add --test 1GigEthernet --device <device name>", "rhcert-cli run --test 1GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 10GigEthernet --device <device name>", "rhcert-cli run --test 10GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 20GigEthernet --device <device name>", "rhcert-cli run --test 20GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 25GigEthernet --device <device name>", "rhcert-cli run --test 25GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 40GigEthernet --device <device name>", "rhcert-cli run --test 40GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 50GigEthernet --device <device name>", "rhcert-cli run --test 50GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 100GigEthernet --device <device name>", "rhcert-cli run --test 100GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 200GigEthernet --device <device name>", "rhcert-cli run --test 200GigEthernet --server <test server IP addr>", "rhcert-cli plan --add --test 400GigEthernet --device <device name>", "rhcert-cli run --test 400GigEthernet --server <test server IP addr>", "rhcert-run", "rhcert-cli plan --add --test Omnipath --device <device name>_devicePort_<port number>", "rhcert-cli run --test Omnipath --server <test server IP addr>", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli plan --add --test 10GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 10GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test 20GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 20GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test 25GigRoCE --device <device name>_devicePort_<port 
number>_netDevice_<net device>", "rhcert-cli run --test 25GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test 40GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 40GigRoCE--server <test server IP addr>", "rhcert-cli plan --add --test 50GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 50GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test 100GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 100GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test 200GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 200GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test 400GigRoCE --device <device name>_devicePort_<port number>_netDevice_<net device>", "rhcert-cli run --test 400GigRoCE --server <test server IP addr>", "rhcert-cli plan --add --test M2_SATA --device host0", "rhcert-cli plan --add --test U2_SATA --device host0", "rhcert-cli plan --add --test M2_NVMe --device nvme0", "rhcert-cli plan --add --test U2_NVMe --device nvme0", "rhcert-cli plan --add --test U3_NVMe --device nvme0", "rhcert-cli plan --add --test E3_NVMe --device nvme0", "yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm", "yum install beakerlib", "grubby --args='intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=32' --update-kernel=USD(grubby --default-kernel)", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli run --test supportable", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-cli run --test=USB3_10Gbps_Storage", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-run", "rhcert-provision", "rhcert-cli run --test VIDEO_PORTS", "rhcert-save", "rhcert-run", "rhcert-run", "rhcert-cli plan --add --test=<testname> --device=<devicename> --udi-<udi>", "rhcert-cli run --test=<test_name>", "rhcert-cli run --test=audio", "rhcert-cli run --test=<test name> --device=<device name>", "rhcert-cli run --test=kdump --device=nfs" ]
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly-Appendix_hw-test-suite-running-tests-submitting-logs-review
5.129. kdeartwork
5.129. kdeartwork 5.129.1. RHBA-2012:0450 - kdeartwork bug fix update Updated kdeartwork packages that fix one bug are now available for Red Hat Enterprise Linux 6. The K Desktop Environment (KDE) is a graphical desktop environment for the X Window System. The kdeartwork packages include styles, themes and screen savers for KDE. Bug Fix BZ# 736624 Previously, the KPendulum and KRotation screen savers, listed in the OpenGL group of KDE screen savers, produced only a blank screen. This update disables KPendulum and KRotation, so neither of them is listed in the OpenGL group anymore. All users of kdeartwork are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kdeartwork
Chapter 4. UserIdentityMapping [user.openshift.io/v1]
Chapter 4. UserIdentityMapping [user.openshift.io/v1] Description UserIdentityMapping maps a user to an identity Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources identity ObjectReference Identity is a reference to an identity kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata user ObjectReference User is a reference to a user 4.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/useridentitymappings POST : create an UserIdentityMapping /apis/user.openshift.io/v1/useridentitymappings/{name} DELETE : delete an UserIdentityMapping GET : read the specified UserIdentityMapping PATCH : partially update the specified UserIdentityMapping PUT : replace the specified UserIdentityMapping 4.2.1. /apis/user.openshift.io/v1/useridentitymappings Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create an UserIdentityMapping Table 4.2. Body parameters Parameter Type Description body UserIdentityMapping schema Table 4.3. 
HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 202 - Accepted UserIdentityMapping schema 401 - Unauthorized Empty 4.2.2. /apis/user.openshift.io/v1/useridentitymappings/{name} Table 4.4. Global path parameters Parameter Type Description name string name of the UserIdentityMapping Table 4.5. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an UserIdentityMapping Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.7. Body parameters Parameter Type Description body DeleteOptions schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified UserIdentityMapping Table 4.9. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified UserIdentityMapping Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.11. Body parameters Parameter Type Description body Patch schema Table 4.12. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified UserIdentityMapping Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body UserIdentityMapping schema Table 4.15. HTTP responses HTTP code Reponse body 200 - OK UserIdentityMapping schema 201 - Created UserIdentityMapping schema 401 - Unauthorized Empty
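For illustration only, a minimal UserIdentityMapping manifest built from the properties above might look like the following. The identity provider, identity name, and user name are hypothetical placeholders; such an object could be created through the POST endpoint listed above, for example with oc create -f <file>:
apiVersion: user.openshift.io/v1
kind: UserIdentityMapping
metadata:
  name: my_identity_provider:bob   # hypothetical mapping name
identity:
  name: my_identity_provider:bob   # reference to an existing Identity
user:
  name: bob                        # reference to an existing User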
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/user_and_group_apis/useridentitymapping-user-openshift-io-v1
Chapter 5. OAuthClient [oauth.openshift.io/v1]
Chapter 5. OAuthClient [oauth.openshift.io/v1] Description OAuthClient describes an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description accessTokenInactivityTimeoutSeconds integer AccessTokenInactivityTimeoutSeconds overrides the default token inactivity timeout for tokens granted to this client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. This value needs to be set only if the default set in configuration is not appropriate for this client. Valid values are: - 0: Tokens for this client never time out - X: Tokens time out if there is no activity for X seconds The current minimum allowed value for X is 300 (5 minutes) WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenMaxAgeSeconds integer AccessTokenMaxAgeSeconds overrides the default access token max age for tokens granted to this client. 0 means no expiration. additionalSecrets array (string) AdditionalSecrets holds other secrets that may be used to identify the client. This is useful for rotation and for service account token validation apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources grantMethod string GrantMethod is a required field which determines how to handle grants for this client. Valid grant handling methods are: - auto: always approves grant requests, useful for trusted clients - prompt: prompts the end user for approval of grant requests, useful for third-party clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURIs array (string) RedirectURIs is the valid redirection URIs associated with a client respondWithChallenges boolean RespondWithChallenges indicates whether the client wants authentication needed responses made in the form of challenges instead of redirects scopeRestrictions array ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. scopeRestrictions[] object ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. secret string Secret is the unique secret associated with a client 5.1.1. .scopeRestrictions Description ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. Type array 5.1.2. 
.scopeRestrictions[] Description ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. Type object Property Type Description clusterRole object ClusterRoleScopeRestriction describes restrictions on cluster role scopes literals array (string) ExactValues means the scope has to match a particular set of strings exactly 5.1.3. .scopeRestrictions[].clusterRole Description ClusterRoleScopeRestriction describes restrictions on cluster role scopes Type object Required roleNames namespaces allowEscalation Property Type Description allowEscalation boolean AllowEscalation indicates whether you can request roles and their escalating resources namespaces array (string) Namespaces is the list of namespaces that can be referenced. * means any of them (including *) roleNames array (string) RoleNames is the list of cluster roles that can referenced. * means anything 5.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclients DELETE : delete collection of OAuthClient GET : list or watch objects of kind OAuthClient POST : create an OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients GET : watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclients/{name} DELETE : delete an OAuthClient GET : read the specified OAuthClient PATCH : partially update the specified OAuthClient PUT : replace the specified OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients/{name} GET : watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/oauth.openshift.io/v1/oauthclients HTTP method DELETE Description delete collection of OAuthClient Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status_v6 schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClient Table 5.3. HTTP responses HTTP code Reponse body 200 - OK OAuthClientList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClient Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body OAuthClient schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 202 - Accepted OAuthClient schema 401 - Unauthorized Empty 5.2.2. /apis/oauth.openshift.io/v1/watch/oauthclients HTTP method GET Description watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/oauth.openshift.io/v1/oauthclients/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method DELETE Description delete an OAuthClient Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status_v6 schema 202 - Accepted Status_v6 schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClient Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClient Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClient Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body OAuthClient schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty 5.2.4. /apis/oauth.openshift.io/v1/watch/oauthclients/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method GET Description watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
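As a sketch of how the properties above fit together, a minimal OAuthClient manifest might look like the following. The client name, secret, and redirect URI are placeholders, and only the required grantMethod plus a few common optional fields are shown:
apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: demo-client                        # hypothetical client name
secret: "<client-secret>"                  # unique secret associated with the client
redirectURIs:
- https://app.example.com/oauth/callback   # hypothetical valid redirection URI
grantMethod: prompt                        # prompt the end user to approve grant requests
accessTokenMaxAgeSeconds: 86400            # optional: access tokens expire after 24 hours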
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/oauth_apis/oauthclient-oauth-openshift-io-v1
Chapter 10. Changing the cloud provider credentials configuration
Chapter 10. Changing the cloud provider credentials configuration For supported configurations, you can change how OpenShift Container Platform authenticates with your cloud provider. To determine which cloud credentials strategy your cluster uses, see Determining the Cloud Credential Operator mode . 10.1. Rotating or removing cloud provider credentials After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. 10.1.1. Rotating cloud provider credentials with the Cloud Credential Operator utility The Cloud Credential Operator (CCO) utility ccoctl supports updating secrets for clusters installed on IBM Cloud(R). 10.1.1.1. Rotating API keys You can rotate API keys for your existing service IDs and update the corresponding secrets. Prerequisites You have configured the ccoctl binary. You have existing service IDs in a live OpenShift Container Platform cluster installed. Procedure Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets: USD ccoctl <provider_name> refresh-keys \ 1 --kubeconfig <openshift_kubeconfig_file> \ 2 --credentials-requests-dir <path_to_credential_requests_directory> \ 3 --name <name> 4 1 1 The name of the provider. For example: ibmcloud or powervs . 2 2 The kubeconfig file associated with the cluster. For example, <installation_directory>/auth/kubeconfig . 3 The directory where the credential requests are stored. 4 The name of the OpenShift Container Platform cluster. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 10.1.2. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. 
Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources The Cloud Credential Operator in mint mode The Cloud Credential Operator in passthrough mode vSphere CSI Driver Operator 10.1.3. Removing cloud provider credentials For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates. Note Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.16 to 4.17), you must reinstate the credential secret with the administrator-level credential. 
If the credential is not present, the update might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . Additional resources The Cloud Credential Operator in mint mode 10.2. Enabling token-based authentication After installing an Microsoft Azure OpenShift Container Platform cluster, you can enable Microsoft Entra Workload ID to use short-term credentials. 10.2.1. Configuring the Cloud Credential Operator utility To configure an existing cluster to create and manage cloud credentials from outside of the cluster, extract and prepare the Cloud Credential Operator utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image}) Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 10.2.2. Enabling Microsoft Entra Workload ID on an existing cluster If you did not configure your Microsoft Azure OpenShift Container Platform cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster. Important The process to enable Workload ID on an existing cluster is disruptive and takes a significant amount of time. 
Before proceeding, observe the following considerations: Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. After starting this process, do not attempt to update the cluster until it is complete. If an update is triggered, the process to enable Workload ID on an existing cluster fails. Prerequisites You have installed an OpenShift Container Platform cluster on Microsoft Azure. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have extracted and prepared the Cloud Credential Operator utility ( ccoctl ) binary. You have access to your Azure account by using the Azure CLI ( az ). Procedure Create an output directory for the manifests that the ccoctl utility generates. This procedure uses ./output_dir as an example. Extract the service account public signing key for the cluster to the output directory by running the following command: USD oc get configmap \ --namespace openshift-kube-apiserver bound-sa-token-signing-certs \ --output 'go-template={{index .data "service-account-001.pub"}}' > ./output_dir/serviceaccount-signer.public 1 1 This procedure uses a file named serviceaccount-signer.public as an example. Use the extracted service account public signing key to create an OpenID Connect (OIDC) issuer and Azure blob storage container with OIDC configuration files by running the following command: USD ./ccoctl azure create-oidc-issuer \ --name <azure_infra_name> \ 1 --output-dir ./output_dir \ --region <azure_region> \ 2 --subscription-id <azure_subscription_id> \ 3 --tenant-id <azure_tenant_id> \ --public-key-file ./output_dir/serviceaccount-signer.public 4 1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value. 2 Specify the region of the existing cluster. 3 Specify the subscription ID of the existing cluster. 4 Specify the file that contains the service account public signing key for the cluster. Verify that the configuration file for the Azure pod identity webhook was created by running the following command: USD ll ./output_dir/manifests Example output total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml 1 The file azure-ad-pod-identity-webhook-config.yaml contains the Azure pod identity webhook configuration. 
Set an OIDC_ISSUER_URL variable with the OIDC issuer URL from the generated manifests in the output directory by running the following command: USD OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml` Update the spec.serviceAccountIssuer parameter of the cluster authentication configuration by running the following command: USD oc patch authentication cluster \ --type=merge \ -p "{\"spec\":{\"serviceAccountIssuer\":\"USD{OIDC_ISSUER_URL}\"}}" Monitor the configuration update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key. Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command: USD oc patch cloudcredential cluster \ --type=merge \ --patch '{"spec":{"credentialsMode":"Manual"}}' Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --included \ --to <path_to_directory_for_credentials_requests> \ --registry-config ~/.pull-secret Note This command might take a few moments to run. Set an AZURE_INSTALL_RG variable with the Azure resource group name by running the following command: USD AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'` Use the ccoctl utility to create managed identities for all CredentialsRequest objects by running the following command: USD ccoctl azure create-managed-identities \ --name <azure_infra_name> \ --output-dir ./output_dir \ --region <azure_region> \ --subscription-id <azure_subscription_id> \ --credentials-requests-dir <path_to_directory_for_credentials_requests> \ --issuer-url "USD{OIDC_ISSUER_URL}" \ --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 1 --installation-resource-group-name "USD{AZURE_INSTALL_RG}" 1 Specify the name of the resource group that contains the DNS zone. Apply the Azure pod identity webhook configuration for Workload ID by running the following command: USD oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml Apply the secrets generated by the ccoctl utility by running the following command: USD find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {} This process might take several minutes. Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key. Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. 
The following output indicates that the process is complete: All nodes rebooted Monitor the configuration update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Optional: Remove the Azure root credentials secret by running the following command: USD oc delete secret -n kube-system azure-credentials Additional resources Microsoft Entra Workload ID Configuring an Azure cluster to use short-term credentials 10.2.3. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that individual components are using short-term security credentials by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 10.3. Additional resources About the Cloud Credential Operator
[ "ccoctl <provider_name> refresh-keys \\ 1 --kubeconfig <openshift_kubeconfig_file> \\ 2 --credentials-requests-dir <path_to_credential_requests_directory> \\ 3 --name <name> 4", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs --output 'go-template={{index .data \"service-account-001.pub\"}}' > ./output_dir/serviceaccount-signer.public 1", "./ccoctl azure create-oidc-issuer --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --tenant-id <azure_tenant_id> --public-key-file ./output_dir/serviceaccount-signer.public 4", "ll ./output_dir/manifests", "total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 
1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml", "OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`", "oc patch authentication cluster --type=merge -p \"{\\\"spec\\\":{\\\"serviceAccountIssuer\\\":\\\"USD{OIDC_ISSUER_URL}\\\"}}\"", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc patch cloudcredential cluster --type=merge --patch '{\"spec\":{\"credentialsMode\":\"Manual\"}}'", "oc adm release extract --credentials-requests --included --to <path_to_directory_for_credentials_requests> --registry-config ~/.pull-secret", "AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`", "ccoctl azure create-managed-identities --name <azure_infra_name> --output-dir ./output_dir --region <azure_region> --subscription-id <azure_subscription_id> --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 1 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\"", "oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml", "find ./output_dir/manifests -iname \"openshift*yaml\" -print0 | xargs -I {} -0 -t oc replace -f {}", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc delete secret -n kube-system azure-credentials", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/changing-cloud-credentials-configuration
7.141. opencryptoki
7.141. opencryptoki 7.141.1. RHBA-2015:1278 - opencryptoki bug fix and enhancement update Updated opencryptoki packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The opencryptoki packages contain version 2.11 of the PKCS#11 API, implemented for IBM Cryptocards, such as IBM 4764 and 4765 crypto cards. These packages include support for the IBM 4758 Cryptographic CoProcessor (with the PKCS#11 firmware loaded), the IBM eServer Cryptographic Accelerator (FC 4960 on IBM eServer System p), the IBM Crypto Express2 (FC 0863 or FC 0870 on IBM System z), and the IBM CP Assist for Cryptographic Function (FC 3863 on IBM System z). The opencryptoki packages also provide a software token implementation that can be used without any cryptographic hardware. These packages contain the Slot Daemon (pkcsslotd) and general utilities. Note The opencryptoki packages have been upgraded to upstream version 3.2, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1148134 ) Enhancements BZ# 1148734 This update enables Central Processor Assist for Cryptographic Functions (CPACF) Message Security Assist 4 (MSA-4) extensions with new modes of operation for opencryptoki on IBM System z. In addition, this hardware encryption improves performance on machines z196 and later. BZ# 11148133 This update also implements an opencryptoki token for access to the Enterprise PKCS#11 (EP11) features of the Crypto Express4S (CEX4S) adapter that implements certified PKCS#11 mechanisms on IBM System z. Users of opencryptoki are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-opencryptoki
Chapter 10. Admission plugins
Chapter 10. Admission plugins Admission plugins are used to help regulate how OpenShift Container Platform functions. 10.1. About admission plugins Admission plugins intercept requests to the master API to validate resource requests. After a request is authenticated and authorized, the admission plugins ensure that any associated policies are followed. For example, they are commonly used to enforce security policy, resource limitations or configuration requirements. Admission plugins run in sequence as an admission chain. If any admission plugin in the sequence rejects a request, the whole chain is aborted and an error is returned. OpenShift Container Platform has a default set of admission plugins enabled for each resource type. These are required for proper functioning of the cluster. Admission plugins ignore resources that they are not responsible for. In addition to the defaults, the admission chain can be extended dynamically through webhook admission plugins that call out to custom webhook servers. There are two types of webhook admission plugins: a mutating admission plugin and a validating admission plugin. The mutating admission plugin runs first and can both modify resources and validate requests. The validating admission plugin validates requests and runs after the mutating admission plugin so that modifications triggered by the mutating admission plugin can also be validated. Calling webhook servers through a mutating admission plugin can produce side effects on resources related to the target object. In such situations, you must take steps to validate that the end result is as expected. Warning Dynamic admission should be used cautiously because it impacts cluster control plane operations. When calling webhook servers through webhook admission plugins in OpenShift Container Platform 4.13, ensure that you have read the documentation fully and tested for side effects of mutations. Include steps to restore resources back to their original state prior to mutation, in the event that a request does not pass through the entire admission chain. 10.2. Default admission plugins Default validating and admission plugins are enabled in OpenShift Container Platform 4.13. These default plugins contribute to fundamental control plane functionality, such as ingress policy, cluster resource limit override and quota policy. The following lists contain the default admission plugins: Example 10.1. 
Validating admission plugins LimitRanger ServiceAccount PodNodeSelector Priority PodTolerationRestriction OwnerReferencesPermissionEnforcement PersistentVolumeClaimResize RuntimeClass CertificateApproval CertificateSigning CertificateSubjectRestriction autoscaling.openshift.io/ManagementCPUsOverride authorization.openshift.io/RestrictSubjectBindings scheduling.openshift.io/OriginPodNodeEnvironment network.openshift.io/ExternalIPRanger network.openshift.io/RestrictedEndpointsAdmission image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/SCCExecRestrictions route.openshift.io/IngressAdmission config.openshift.io/ValidateAPIServer config.openshift.io/ValidateAuthentication config.openshift.io/ValidateFeatureGate config.openshift.io/ValidateConsole operator.openshift.io/ValidateDNS config.openshift.io/ValidateImage config.openshift.io/ValidateOAuth config.openshift.io/ValidateProject config.openshift.io/DenyDeleteClusterConfiguration config.openshift.io/ValidateScheduler quota.openshift.io/ValidateClusterResourceQuota security.openshift.io/ValidateSecurityContextConstraints authorization.openshift.io/ValidateRoleBindingRestriction config.openshift.io/ValidateNetwork operator.openshift.io/ValidateKubeControllerManager ValidatingAdmissionWebhook ResourceQuota quota.openshift.io/ClusterResourceQuota Example 10.2. Mutating admission plugins NamespaceLifecycle LimitRanger ServiceAccount NodeRestriction TaintNodesByCondition PodNodeSelector Priority DefaultTolerationSeconds PodTolerationRestriction DefaultStorageClass StorageObjectInUseProtection RuntimeClass DefaultIngressClass autoscaling.openshift.io/ManagementCPUsOverride scheduling.openshift.io/OriginPodNodeEnvironment image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/DefaultSecurityContextConstraints MutatingAdmissionWebhook 10.3. Webhook admission plugins In addition to OpenShift Container Platform default admission plugins, dynamic admission can be implemented through webhook admission plugins that call webhook servers, to extend the functionality of the admission chain. Webhook servers are called over HTTP at defined endpoints. There are two types of webhook admission plugins in OpenShift Container Platform: During the admission process, the mutating admission plugin can perform tasks, such as injecting affinity labels. At the end of the admission process, the validating admission plugin can be used to make sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, OpenShift Container Platform schedules the object as configured. When an API request comes in, mutating or validating admission plugins use the list of external webhooks in the configuration and call them in parallel: If all of the webhooks approve the request, the admission chain continues. If any of the webhooks deny the request, the admission request is denied and the reason for doing so is based on the first denial. If more than one webhook denies the admission request, only the first denial reason is returned to the user. If an error is encountered when calling a webhook, the request is either denied or the webhook is ignored depending on the error policy set. If the error policy is set to Ignore , the request is unconditionally accepted in the event of a failure. If the policy is set to Fail , failed requests are denied. Using Ignore can result in unpredictable behavior for all clients. 
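To make the exchange concrete, the following is a minimal sketch of the JSON body that a validating webhook server returns to the API server in an AdmissionReview response. The uid echoes the value from the incoming request, allowed decides whether the admission request passes, and the status code and message shown here are illustrative only; your webhook server might serve the v1beta1 admission API instead, to match the configuration examples later in this chapter:
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid_from_the_admissionreview_request>",
    "allowed": false,
    "status": {
      "code": 403,
      "message": "namespace is reserved"
    }
  }
}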
Communication between the webhook admission plugin and the webhook server must use TLS. Generate a CA certificate and use the certificate to sign the server certificate that is used by your webhook admission server. The PEM-encoded CA certificate is supplied to the webhook admission plugin using a mechanism, such as service serving certificate secrets. The following diagram illustrates the sequential admission chain process within which multiple webhook servers are called. Figure 10.1. API admission chain with mutating and validating admission plugins An example webhook admission plugin use case is where all pods must have a common set of labels. In this example, the mutating admission plugin can inject labels and the validating admission plugin can check that labels are as expected. OpenShift Container Platform would subsequently schedule pods that include required labels and reject those that do not. Some common webhook admission plugin use cases include: Namespace reservation. Limiting custom network resources managed by the SR-IOV network device plugin. Defining tolerations that enable taints to qualify which pods should be scheduled on a node. Pod priority class validation. Note The maximum default webhook timeout value in OpenShift Container Platform is 13 seconds, and it cannot be changed. 10.4. Types of webhook admission plugins Cluster administrators can call out to webhook servers through the mutating admission plugin or the validating admission plugin in the API server admission chain. 10.4.1. Mutating admission plugin The mutating admission plugin is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plugin is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification. Sample mutating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None 1 Specifies a mutating admission plugin configuration. 2 The name for the MutatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. 
Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. Important In OpenShift Container Platform 4.13, objects created by users or control loops through a mutating admission plugin might return unexpected results, especially if values set in an initial request are overwritten, which is not recommended. 10.4.2. Validating admission plugin A validating admission plugin is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook which is called by the validating admission plugin, to ensure that all nodeSelector fields are constrained by the node selector restrictions on the namespace. Sample validating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown 1 Specifies a validating admission plugin configuration. 2 The name for the ValidatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. 10.5. Configuring dynamic admission This procedure outlines high-level steps to configure dynamic admission. The functionality of the admission chain is extended by configuring a webhook admission plugin to call out to a webhook server. The webhook server is also configured as an aggregated API server. This allows other OpenShift Container Platform components to communicate with the webhook using internal credentials and facilitates testing using the oc command. Additionally, this enables role based access control (RBAC) into the webhook and prevents token information from other API servers from being disclosed to the webhook. Prerequisites An OpenShift Container Platform account with cluster administrator access. The OpenShift Container Platform CLI ( oc ) installed. A published webhook server container image. 
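Before starting the following procedure, note that one of its first steps assumes you already have a local CA and a serving certificate signed for the DNS name of the webhook service. If you do not, a minimal openssl sketch such as the one below can produce them; all file names, the CA subject, and the service name server.my-webhook-namespace.svc are examples and must be adjusted to your environment:
# Create an example local CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=my-webhook-ca"
# Create the webhook server key and a certificate signing request (CSR)
openssl req -newkey rsa:4096 -nodes \
  -keyout tls.key -out server.csr \
  -subj "/CN=server.my-webhook-namespace.svc"
# Sign the CSR with the local CA, adding the service DNS name as a subject alternative name
openssl x509 -req -sha256 -days 365 \
  -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -extfile <(printf "subjectAltName=DNS:server.my-webhook-namespace.svc") \
  -out tls.crt
In this sketch, the base64-encoded contents of tls.crt and tls.key correspond to the values expected by the webhook-secret.yaml file later in the procedure, and ca.crt is the CA certificate referenced as <ca_signing_certificate> in the webhook and API service configurations.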
Procedure Build a webhook server container image and make it available to the cluster using an image registry. Create a local CA key and certificate and use them to sign the webhook server's certificate signing request (CSR). Create a new project for webhook resources: USD oc new-project my-webhook-namespace 1 1 Note that the webhook server might expect a specific name. Define RBAC rules for the aggregated API service in a file called rbac.yaml : apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - "" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server 1 Delegates authentication and authorization to the webhook server API. 2 Allows the webhook server to access cluster resources. 3 Points to resources. This example points to the namespacereservations resource. 4 Enables the aggregated API server to create admission reviews. 5 Points to resources. This example points to the namespacereservations resource. 6 Enables the webhook server to access cluster resources. 7 Role binding to read the configuration for terminating authentication. 8 Default cluster role and cluster role bindings for an aggregated API server. 
Apply those RBAC rules to the cluster: USD oc auth reconcile -f rbac.yaml Create a YAML file called webhook-daemonset.yaml that is used to deploy a webhook as a daemon set server in a namespace: apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: "true" spec: selector: matchLabels: server: "true" template: metadata: name: server labels: server: "true" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert 1 Note that the webhook server might expect a specific container name. 2 Points to a webhook server container image. Replace <image_registry_username>/<image_path>:<tag> with the appropriate value. 3 Specifies webhook container run commands. Replace <container_commands> with the appropriate value. 4 Defines the target port within pods. This example uses port 8443. 5 Specifies the port used by the readiness probe. This example uses port 8443. Deploy the daemon set: USD oc apply -f webhook-daemonset.yaml Define a secret for the service serving certificate signer, within a YAML file called webhook-secret.yaml : apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2 1 References the signed webhook server certificate. Replace <server_certificate> with the appropriate certificate in base64 format. 2 References the signed webhook server key. Replace <server_key> with the appropriate key in base64 format. Create the secret: USD oc apply -f webhook-secret.yaml Define a service account and service, within a YAML file called webhook-service.yaml : apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: "true" ports: - port: 443 1 targetPort: 8443 2 1 Defines the port that the service listens on. This example uses port 443. 2 Defines the target port within pods that the service forwards connections to. This example uses port 8443. Expose the webhook server within the cluster: USD oc apply -f webhook-service.yaml Define a custom resource definition for the webhook server, in a file called webhook-crd.yaml : apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7 1 Reflects CustomResourceDefinition spec values and is in the format <plural>.<group> . This example uses the namespacereservations resource. 2 REST API group name. 3 REST API version name. 4 Accepted values are Namespaced or Cluster . 5 Plural name to be included in URL. 6 Alias seen in oc output. 7 The reference for resource manifests. 
Apply the custom resource definition: USD oc apply -f webhook-crd.yaml Configure the webhook server also as an aggregated API server, within a file called webhook-api-service.yaml : apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1 1 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. Deploy the aggregated API service: USD oc apply -f webhook-api-service.yaml Define the webhook admission plugin configuration within a file called webhook-config.yaml . This example uses the validating admission plugin: apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - "*" resources: - projectrequests - operations: - CREATE apiGroups: - "" apiVersions: - "*" resources: - namespaces failurePolicy: Fail 1 Name for the ValidatingWebhookConfiguration object. This example uses the namespacereservations resource. 2 Name of the webhook to call. This example uses the namespacereservations resource. 3 Enables access to the webhook server through the aggregated API. 4 The webhook URL used for admission requests. This example uses the namespacereservation resource. 5 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. Deploy the webhook: USD oc apply -f webhook-config.yaml Verify that the webhook is functioning as expected. For example, if you have configured dynamic admission to reserve specific namespaces, confirm that requests to create those namespaces are rejected and that requests to create non-reserved namespaces succeed. 10.6. Additional resources Limiting custom network resources managed by the SR-IOV network device plugin Defining tolerations that enable taints to qualify which pods should be scheduled on a node Pod priority class validation
[ "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown", "oc new-project my-webhook-namespace 1", "apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server", "oc auth reconcile -f rbac.yaml", "apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: 
HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert", "oc apply -f webhook-daemonset.yaml", "apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2", "oc apply -f webhook-secret.yaml", "apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2", "oc apply -f webhook-service.yaml", "apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7", "oc apply -f webhook-crd.yaml", "apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1", "oc apply -f webhook-api-service.yaml", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail", "oc apply -f webhook-config.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/architecture/admission-plug-ins
1.5. MAC Address Pools
1.5. MAC Address Pools MAC address pools define the range(s) of MAC addresses allocated for each cluster. A MAC address pool is specified for each cluster. By using MAC address pools, Red Hat Virtualization can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a cluster are within the range for the assigned MAC address pool. The same MAC address pool can be shared by multiple clusters, but each cluster has a single MAC address pool assigned. A default MAC address pool is created by Red Hat Virtualization and is used if another MAC address pool is not assigned. For more information about assigning MAC address pools to clusters see Section 8.2.1, "Creating a New Cluster" . Note If more than one Red Hat Virtualization cluster shares a network, do not rely solely on the default MAC address pool because the virtual machines of each cluster will try to use the same range of MAC addresses, leading to conflicts. To avoid MAC address conflicts, check the MAC address pool ranges to ensure that each cluster is assigned a unique MAC address range. The MAC address pool assigns the available MAC address following the last address that was returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If there are multiple MAC address ranges with available MAC addresses defined in a single MAC address pool, the ranges take turns in serving incoming requests in the same way available MAC addresses are selected. 1.5.1. Creating MAC Address Pools You can create new MAC address pools. Creating a MAC Address Pool Click Administration Configure . Click the MAC Address Pools tab. Click Add . Enter the Name and Description of the new MAC address pool. Select the Allow Duplicates check box to allow a MAC address to be used multiple times in a pool. The MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address. Note If one MAC address pool has duplicates disabled, and another has duplicates enabled, each MAC address can be used once in the pool with duplicates disabled but can be used multiple times in the pool with duplicates enabled. Enter the required MAC Address Ranges . To enter multiple ranges click the plus button to the From and To fields. Click OK . 1.5.2. Editing MAC Address Pools You can edit MAC address pools to change the details, including the range of MAC addresses available in the pool and whether duplicates are allowed. Editing MAC Address Pool Properties Click Administration Configure . Click the MAC Address Pools tab. Select the MAC address pool to be edited. Click Edit . Change the Name , Description , Allow Duplicates , and MAC Address Ranges fields as required. Note When a MAC address range is updated, the MAC addresses of existing NICs are not reassigned. MAC addresses that were already assigned, but are outside of the new MAC address range, are added as user-specified MAC addresses and are still tracked by that MAC address pool. Click OK . 1.5.3. Editing MAC Address Pool Permissions After a MAC address pool has been created, you can edit its user permissions. The user permissions control which data centers can use the MAC address pool. See Section 1.1, "Roles" for more information on adding new user permissions. 
Editing MAC Address Pool Permissions Click Administration Configure . Click the MAC Address Pools tab. Select the required MAC address pool. Edit the user permissions for the MAC address pool: To add user permissions to a MAC address pool: Click Add in the user permissions pane at the bottom of the Configure window. Search for and select the required users. Select the required role from the Role to Assign drop-down list. Click OK to add the user permissions. To remove user permissions from a MAC address pool: Select the user permission to be removed in the user permissions pane at the bottom of the Configure window. Click Remove to remove the user permissions. 1.5.4. Removing MAC Address Pools You can remove a created MAC address pool if the pool is not associated with a cluster, but the default MAC address pool cannot be removed. Removing a MAC Address Pool Click Administration Configure . Click the MAC Address Pools tab. Select the MAC address pool to be removed. Click Remove . Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-MAC_Address_Pools
Chapter 6. Uninstalling OpenShift Data Foundation
Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_power/uninstalling_openshift_data_foundation
Chapter 128. Vert.x WebSocket
Chapter 128. Vert.x WebSocket Since Camel 3.5 Both producer and consumer are supported . The Vert.x ( http://vertx.io/ ) WebSocket component provides WebSocket capabilities as a WebSocket server, or as a client to connect to an existing WebSocket. 128.1. Dependencies When using vertx-websocket with Red Hat build of Camel Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-websocket-starter</artifactId> </dependency> 128.2. URI format vertx-websocket://hostname[:port][/resourceUri][?options] 128.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 128.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 128.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 128.4. Component Options The Vert.x WebSocket component supports 11 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean allowOriginHeader (advanced) Whether the WebSocket client should add the Origin header to the WebSocket handshake request.
true boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultHost (advanced) Default value for host name that the WebSocket should bind to. 0.0.0.0 String defaultPort (advanced) Default value for the port that the WebSocket should bind to. 0 int originHeaderUrl (advanced) The value of the Origin header that the WebSocket client should use on the WebSocket handshake request. When not specified, the WebSocket client will automatically determine the value for the Origin from the request URL. String router (advanced) To provide a custom vertx router to use on the WebSocket server. Router vertx (advanced) To use an existing vertx instead of creating a new instance. Vertx vertxOptions (advanced) To provide a custom set of vertx options for configuring vertx. VertxOptions useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 128.5. Endpoint Options The Vert.x WebSocket endpoint is configured using URI syntax: vertx-websocket:host:port/path with the following path and query parameters: 128.5.1. Path Parameters (3 parameters) Name Description Default Type host (common) Required WebSocket hostname, such as localhost or a remote host when in client mode. String port (common) Required WebSocket port number to use. int path (common) WebSocket path to use. String 128.5.2. Query Parameters (18 parameters) Name Description Default Type allowedOriginPattern (consumer) Regex pattern to match the origin header sent by WebSocket clients. String allowOriginHeader (consumer) Whether the WebSocket client should add the Origin header to the WebSocket handshake request. true boolean consumeAsClient (consumer) When set to true, the consumer acts as a WebSocket client, creating exchanges on each received WebSocket event. false boolean fireWebSocketConnectionEvents (consumer) Whether the server consumer will create a message exchange when a new WebSocket peer connects or disconnects. false boolean maxReconnectAttempts (consumer) When consumeAsClient is set to true this sets the maximum number of allowed reconnection attempts to a previously closed WebSocket. A value of 0 (the default) will attempt to reconnect indefinitely. 0 int originHeaderUrl (consumer) The value of the Origin header that the WebSocket client should use on the WebSocket handshake request. When not specified, the WebSocket client will automatically determine the value for the Origin from the request URL. String reconnectInitialDelay (consumer) When consumeAsClient is set to true this sets the initial delay in milliseconds before attempting to reconnect to a previously closed WebSocket. 0 int reconnectInterval (consumer) When consumeAsClient is set to true this sets the interval in milliseconds at which reconnecting to a previously closed WebSocket occurs. 1000 int router (consumer) To use an existing vertx router for the HTTP server. Router serverOptions (consumer) Sets customized options for configuring the HTTP server hosting the WebSocket for the consumer.
HttpServerOptions bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern clientOptions (producer) Sets customized options for configuring the WebSocket client used in the producer. HttpClientOptions clientSubProtocols (producer) Comma separated list of WebSocket subprotocols that the client should use for the Sec-WebSocket-Protocol header. String sendToAll (producer) To send to all websocket subscribers. Can be used to configure at the endpoint level, instead of providing the VertxWebsocketConstants.SEND_TO_ALL header on the message. Note that when using this option, the host name specified for the vertx-websocket producer URI must match one used for an existing vertx-websocket consumer. Note that this option only applies when producing messages to endpoints hosted by the vertx-websocket consumer and not to an externally hosted WebSocket. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters 128.6. Message Headers The Vert.x WebSocket component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelVertxWebsocket.connectionKey (common) Constant: CONNECTION_KEY Sends the message to the client with the given connection key. You can use a comma separated list of keys to send a message to multiple clients. Note that this option only applies when producing messages to endpoints hosted by the vertx-websocket consumer and not to an externally hosted WebSocket. String CamelVertxWebsocket.sendToAll (producer) Constant: SEND_TO_ALL Sends the message to all clients which are currently connected. You can use the sendToAll option on the endpoint instead of using this header. Note that this option only applies when producing messages to endpoints hosted by the vertx-websocket consumer and not to an externally hosted WebSocket. boolean CamelVertxWebsocket.remoteAddress (consumer) Constant: REMOTE_ADDRESS The remote address. SocketAddress CamelVertxWebsocket.event (consumer) Constant: EVENT The WebSocket event that triggered the message exchange. 
Enum values: CLOSE ERROR MESSAGE OPEN VertxWebsocketEvent 128.7. Usage The following example shows how to expose a WebSocket on http://localhost:8080/echo and returns an 'echo' response back to the same channel: from("vertx-websocket:localhost:8080/echo") .transform().simple("Echo: USD{body}") .to("vertx-websocket:localhost:8080/echo"); It is also possible to configure the consumer to connect as a WebSocket client on a remote address with the consumeAsClient option: from("vertx-websocket:my.websocket.com:8080/chat?consumeAsClient=true") .log("Got WebSocket message USD{body}"); 128.8. Path and query parameters The WebSocket server consumer supports the configuration of parameterized paths. The path parameter value will be set as a Camel exchange header: from("vertx-websocket:localhost:8080/chat/{user}") .log("New message from USD{header.user} >>> USD{body}") You can also retrieve any query parameter values that were used by the WebSocket client to connect to the server endpoint: from("direct:sendChatMessage") .to("vertx-websocket:localhost:8080/chat/camel?role=admin"); from("vertx-websocket:localhost:8080/chat/{user}") .log("New message from USD{header.user} (USD{header.role}) >>> USD{body}") 128.9. Sending messages to peers connected to the vertx-websocket server consumer Note This section only applies when producing messages to a WebSocket hosted by the camel-vertx-websocket consumer. It is not relevant when producing messages to an externally hosted WebSocket. To send a message to all peers connected to a WebSocket hosted by the vertx-websocket server consumer, use the sendToAll=true endpoint option, or the CamelVertxWebsocket.sendToAll header. from("vertx-websocket:localhost:8080/chat") .log("Got WebSocket message USD{body}"); from("direct:broadcastMessage") .setBody().constant("This is a broadcast message!") .to("vertx-websocket:localhost:8080/chat?sendToAll=true"); Alternatively, you can send messages to specific peers by using the CamelVertxWebsocket.connectionKey header. Multiple peers can be specified as a comma separated list. The value of the connectionKey can be determined whenever a peer triggers an event on the vertx-websocket consumer, where a unique key identifying the peer will be propagated via the CamelVertxWebsocket.connectionKey header. from("vertx-websocket:localhost:8080/chat") .log("Got WebSocket message USD{body}"); from("direct:broadcastMessage") .setBody().constant("This is a broadcast message!") .setHeader(VertxWebsocketConstants.CONNECTION_KEY).constant("key-1,key-2,key-3") .to("vertx-websocket:localhost:8080/chat"); 128.10. SSL By default, the ws:// protocol is used, but secure connections with wss:// are supported by configuring the consumer or producer via the sslContextParameters URI parameter and the Camel JSSE Configuration Utility . 128.11. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.vertx-websocket.allow-origin-header Whether the WebSocket client should add the Origin header to the WebSocket handshake request. true Boolean camel.component.vertx-websocket.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.vertx-websocket.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are now processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. false Boolean camel.component.vertx-websocket.default-host Default value for the host name that the WebSocket should bind to. 0.0.0.0 String camel.component.vertx-websocket.default-port Default value for the port that the WebSocket should bind to. 0 Integer camel.component.vertx-websocket.enabled Whether to enable auto configuration of the vertx-websocket component. This is enabled by default. Boolean camel.component.vertx-websocket.lazy-start-producer Whether the producer should be started lazily (on the first message). By starting lazily, you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring the startup, the failure can instead be handled during message routing via Camel's routing error handlers. Be aware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.vertx-websocket.origin-header-url The value of the Origin header that the WebSocket client should use on the WebSocket handshake request. When not specified, the WebSocket client automatically determines the value for the Origin header from the request URL. String camel.component.vertx-websocket.router To provide a custom Vert.x router to use on the WebSocket server. The option is an io.vertx.ext.web.Router type. Router camel.component.vertx-websocket.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.vertx-websocket.vertx To use an existing Vert.x instance instead of creating a new one. The option is an io.vertx.core.Vertx type. Vertx camel.component.vertx-websocket.vertx-options To provide a custom set of Vert.x options for configuring Vert.x. The option is an io.vertx.core.VertxOptions type. VertxOptions
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-websocket-starter</artifactId> </dependency>", "vertx-websocket://hostname[:port][/resourceUri][?options]", "vertx-websocket:host:port/path", "from(\"vertx-websocket:localhost:8080/echo\") .transform().simple(\"Echo: USD{body}\") .to(\"vertx-websocket:localhost:8080/echo\");", "from(\"vertx-websocket:my.websocket.com:8080/chat?consumeAsClient=true\") .log(\"Got WebSocket message USD{body}\");", "from(\"vertx-websocket:localhost:8080/chat/{user}\") .log(\"New message from USD{header.user} >>> USD{body}\")", "from(\"direct:sendChatMessage\") .to(\"vertx-websocket:localhost:8080/chat/camel?role=admin\"); from(\"vertx-websocket:localhost:8080/chat/{user}\") .log(\"New message from USD{header.user} (USD{header.role}) >>> USD{body}\")", "from(\"vertx-websocket:localhost:8080/chat\") .log(\"Got WebSocket message USD{body}\"); from(\"direct:broadcastMessage\") .setBody().constant(\"This is a broadcast message!\") .to(\"vertx-websocket:localhost:8080/chat?sendToAll=true\");", "from(\"vertx-websocket:localhost:8080/chat\") .log(\"Got WebSocket message USD{body}\"); from(\"direct:broadcastMessage\") .setBody().constant(\"This is a broadcast message!\") .setHeader(VertxWebsocketConstants.CONNECTION_KEY).constant(\"key-1,key-2,key-3\") .to(\"vertx-websocket:localhost:8080/chat\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-vertx-websocket-component-starter
Chapter 4. Plug-in Implemented Server Functionality Reference
Chapter 4. Plug-in Implemented Server Functionality Reference This chapter contains reference information on Red Hat Directory Server plug-ins. The configuration for each part of Directory Server plug-in functionality has its own separate entry and set of attributes under the subtree cn=plugins,cn=config . Some of these attributes are common to all plug-ins while others may be particular to a specific plug-in. Check which attributes are currently being used by a given plug-in by performing an ldapsearch on the cn=config subtree. All plug-ins are instances of the nsSlapdPlugin object class, which in turn inherits from the extensibleObject object class. For plug-in configuration attributes to be taken into account by the server, both of these object classes (in addition to the top object class) must be present in the entry; for example, a plug-in configuration entry carries objectClass: top, objectClass: nsSlapdPlugin, and objectClass: extensibleObject alongside its plug-in-specific attributes. 4.1. Server Plug-in Functionality Reference The following tables provide a quick overview of the plug-ins provided with Directory Server, along with their configurable options, configurable arguments, default setting, dependencies, general performance-related information, and further reading. These tables assist in weighing plug-in performance gains and costs and in choosing the optimal settings for the deployment. The Further Information section cross-references further reading, where this is available. 4.1.1. 7-bit Check Plug-in Plug-in Parameter Description Plug-in ID NS7bitAtt DN of Configuration Entry cn=7-bit check,cn=plugins,cn=config Description Checks that certain attributes are 7-bit clean Type preoperation Configurable Options on off Default Setting on Configurable Arguments List of attributes ( uid mail userpassword ) followed by "," and then suffixes on which the check is to occur. Dependencies Database Performance-Related Information None Further Information 4.1.2. ACL Plug-in Plug-in Parameter Description Plug-in ID acl DN of Configuration Entry cn=ACL Plugin,cn=plugins,cn=config Description ACL access check plug-in Type accesscontrol Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Access control incurs a minimal performance hit. Leave this plug-in enabled since it is the primary means of access control for the server. Further Information 4.1.3. ACL Preoperation Plug-in Plug-in Parameter Description Plug-in ID acl DN of Configuration Entry cn=ACL preoperation,cn=plugins,cn=config Description ACL access check plug-in Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Access control incurs a minimal performance hit. Leave this plug-in enabled since it is the primary means of access control for the server. Further Information 4.1.4. Account Policy Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Account Policy Plugin,cn=plugins,cn=config Description Defines a policy to lock user accounts after a certain expiration period or inactivity period. Type object Configurable Options on off Default Setting off Configurable Arguments A pointer to a configuration entry which contains the global account policy settings. Dependencies Database Performance-Related Information None Further Information 4.1.5.
Account Usability Plug-in Plug-in Parameter Description Plug-in ID acctusability DN of Configuration Entry cn=Account Usability Plugin,cn=plugins,cn=config Description Checks the authentication status, or usability, of an account without actually authenticating as the given user Type preoperation Configurable Options on off Default Setting on Dependencies Database Performance-Related Information 4.1.6. AD DN Plug-in Plug-in Parameter Description Plug-in ID addn DN of Configuration Entry cn=addn,cn=plugins,cn=config Description Enables the usage of Active Directory-formatted user names, such as user_name and user_name @ domain , for bind operations. Type preoperation Configurable Options on off Default Setting off Configurable Arguments addn_default_domain : Sets the default domain that is automatically appended to user names without domain. Dependencies None Performance-Related Information 4.1.7. Attribute Uniqueness Plug-in Plug-in Parameter Description Plug-in ID NSUniqueAttr DN of Configuration Entry cn=Attribute Uniqueness,cn=plugins,cn=config Description Checks that the values of specified attributes are unique each time a modification occurs on an entry. For example, most sites require that a user ID and email address be unique. Type preoperation Configurable Options on off Default Setting off Configurable Arguments To check for UID attribute uniqueness in all listed subtrees, enter uid "DN" "DN"... . However, to check for UID attribute uniqueness when adding or updating entries with the requiredObjectClass , enter attribute="uid" MarkerObjectclass = "ObjectClassName" and, optionally requiredObjectClass = "ObjectClassName" . This starts checking for the required object classes from the parent entry containing the ObjectClass as defined by the MarkerObjectClass attribute. Dependencies Database Performance-Related Information Directory Server provides the UID Uniqueness Plug-in by default. To ensure unique values for other attributes, create instances of the Attribute Uniqueness Plug-in for those attributes. See the "Using the Attribute Uniqueness Plug-in" section in the Red Hat Directory Server Administration Guide for more information about the Attribute Uniqueness Plug-in. The UID Uniqueness Plug-in is off by default due to operation restrictions that need to be addressed before enabling the plug-in in a multi-supplier replication environment. Turning the plug-in on may slow down Directory Server performance. Further Information 4.1.8. Auto Membership Plug-in Plug-in Parameter Description Plug-in ID Auto Membership DN of Configuration Entry cn=Auto Membership,cn=plugins,cn=config Description Container entry for automember definitions. Automember definitions search new entries and, if they match defined LDAP search filters and regular expression conditions, add the entry to a specified group automatically. Type preoperation Configurable Options on off Default Setting off Configurable Arguments None for the main plug-in entry. The definition entry must specify an LDAP scope, LDAP filter, default group, and member attribute format. The optional regular expression child entry can specify inclusive and exclusive expressions and a different target group. Dependencies Database Performance-Related Information None. Further Information 4.1.9. Binary Syntax Plug-in Warning Binary syntax is deprecated. Use Octet String syntax instead. Plug-in Parameter Description Plug-in ID bin-syntax DN of Configuration Entry cn=Binary Syntax,cn=plugins,cn=config Description Syntax for handling binary data. 
Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.10. Bit String Syntax Plug-in Plug-in Parameter Description Plug-in ID bitstring-syntax DN of Configuration Entry cn=Bit String Syntax,cn=plugins,cn=config Description Supports bit string syntax values and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.11. Bitwise Plug-in Plug-in Parameter Description Plug-in ID bitwise DN of Configuration Entry cn=Bitwise Plugin,cn=plugins,cn=config Description Matching rule for performing bitwise operations against the LDAP server Type matchingrule Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.12. Boolean Syntax Plug-in Plug-in Parameter Description Plug-in ID boolean-syntax DN of Configuration Entry cn=Boolean Syntax,cn=plugins,cn=config Description Supports boolean syntax values (TRUE or FALSE) and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.13. Case Exact String Syntax Plug-in Plug-in Parameter Description Plug-in ID ces-syntax DN of Configuration Entry cn=Case Exact String Syntax,cn=plugins,cn=config Description Supports case-sensitive matching or Directory String, IA5 String, and related syntaxes. This is not a case-exact syntax; this plug-in provides case-sensitive matching rules for different string syntaxes. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.14. Case Ignore String Syntax Plug-in Plug-in Parameter Description Plug-in ID directorystring-syntax DN of Configuration Entry cn=Case Ignore String Syntax,cn=plugins,cn=config Description Supports case-insensitive matching rules for Directory String, IA5 String, and related syntaxes. This is not a case-insensitive syntax; this plug-in provides case-sensitive matching rules for different string syntaxes. Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.15. 
Chaining Database Plug-in Plug-in Parameter Description Plug-in ID chaining database DN of Configuration Entry cn=Chaining database,cn=plugins,cn=config Description Enables back end databases to be linked Type database Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information There are many performance related tuning parameters involved with the chaining database. See the "Maintaining Database Links" section in the Red Hat Directory Server Administration Guide . Further Information 4.1.16. Class of Service Plug-in Plug-in Parameter Description Plug-in ID cos DN of Configuration Entry cn=Class of Service,cn=plugins,cn=config Description Allows for sharing of attributes between entries Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in * Named: Views Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Leave this plug-in running at all times. Further Information 4.1.17. Content Synchronization Plug-in Plug-in Parameter Description Plug-in ID content-sync-plugin DN of Configuration Entry cn=Content Synchronization,cn=plugins,cn=config Description Enables support for the SyncRepl protocol in Directory Server according to RFC 4533 . Type object Configurable Options on off Default Setting off Configurable Arguments None Dependencies Retro Changelog Plug-in Performance-Related Information If you know which back end or subtree clients access to synchronize data, limit the scope of the Retro Changelog plug-in accordingly. Further Information 4.1.18. Country String Syntax Plug-in Plug-in Parameter Description Plug-in ID countrystring-syntax DN of Configuration Entry cn=Country String Syntax,cn=plugins,cn=config Description Supports country naming syntax values and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.19. Delivery Method Syntax Plug-in Plug-in Parameter Description Plug-in ID delivery-syntax DN of Configuration Entry cn=Delivery Method Syntax,cn=plugins,cn=config Description Supports values that are lists of preferred deliver methods and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.20. deref Plug-in Plug-in Parameter Description Plug-in ID Dereference DN of Configuration Entry cn=deref,cn=plugins,cn=config Description For dereference controls in directory searches Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.21. Distinguished Name Syntax Plug-in Plug-in Parameter Description Plug-in ID dn-syntax DN of Configuration Entry cn=Distinguished Name Syntax,cn=plugins,cn=config Description Supports DN value syntaxes and related matching rules from RFC 4517 . 
Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.22. Distributed Numeric Assignment Plug-in Plug-in Information Description Plug-in ID Distributed Numeric Assignment Configuration Entry DN cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Description Distributed Numeric Assignment plugin Type preoperation Configurable Options on off Default Setting off Configurable Arguments Dependencies Database Performance-Related Information None Further Information 4.1.23. Enhanced Guide Syntax Plug-in Plug-in Parameter Description Plug-in ID enhancedguide-syntax DN of Configuration Entry cn=Enhanced Guide Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for creating complex criteria, based on attributes and filters, to build searches; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.24. Facsimile Telephone Number Syntax Plug-in Plug-in Parameter Description Plug-in ID facsimile-syntax DN of Configuration Entry cn=Facsimile Telephone Number Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for fax numbers; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.25. Fax Syntax Plug-in Plug-in Parameter Description Plug-in ID fax-syntax DN of Configuration Entry cn=Fax Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for storing images of faxed objects; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.26. Generalized Time Syntax Plug-in Plug-in Parameter Description Plug-in ID time-syntax DN of Configuration Entry cn=Generalized Time Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for dealing with dates, times and time zones; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.27. Guide Syntax Plug-in Warning This syntax is deprecated. Use Enhanced Guide syntax instead. Plug-in Parameter Description Plug-in ID guide-syntax DN of Configuration Entry cn=Guide Syntax,cn=plugins,cn=config Description Syntax for creating complex criteria, based on attributes and filters, to build searches Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.28. 
HTTP Client Plug-in Plug-in Parameter Description Plug-in ID http-client DN of Configuration Entry cn=HTTP Client,cn=plugins,cn=config Description HTTP client plug-in Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Further Information 4.1.29. Integer Syntax Plug-in Plug-in Parameter Description Plug-in ID int-syntax DN of Configuration Entry cn=Integer Syntax,cn=plugins,cn=config Description Supports integer syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.30. Internationalization Plug-in Plug-in Parameter Description Plug-in ID orderingrule DN of Configuration Entry cn=Internationalization Plugin,cn=plugins,cn=config Description Enables internationalized strings to be ordered in the directory Type matchingrule Configurable Options on off Default Setting on Configurable Arguments The Internationalization Plug-in has one argument, which must not be modified, which specifies the location of the /etc/dirsrv/config/slapd-collations.conf file. This file stores the collation orders and locales used by the Internationalization Plug-in. Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.31. JPEG Syntax Plug-in Plug-in Parameter Description Plug-in ID jpeg-syntax DN of Configuration Entry cn=JPEG Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for JPEG image data; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.32. ldbm database Plug-in Plug-in Parameter Description Plug-in ID ldbm-backend DN of Configuration Entry cn=ldbm database,cn=plugins,cn=config Description Implements local databases Type database Configurable Options Default Setting on Configurable Arguments None Dependencies * Syntax * matchingRule Performance-Related Information See Section 4.4, "Database Plug-in Attributes" for further information on database configuration. Further Information See the "Configuring Directory Databases" chapter in the Red Hat Directory Server Administration Guide . 4.1.33. Linked Attributes Plug-in Plug-in Parameter Description Plug-in ID Linked Attributes DN of Configuration Entry cn=Linked Attributes,cn=plugins,cn=config Description Container entry for linked-managed attribute configuration entries. Each configuration entry under the container links one attribute to another, so that when one entry is updated (such as a manager entry), then any entry associated with that entry (such as a custom directReports attribute) are automatically updated with a user-specified corresponding attribute. Type preoperation Configurable Options on off Default Setting off Configurable Arguments None for the main plug-in entry. 
Each plug-in instance has three possible attributes: * linkType, which sets the primary attribute for the plug-in to monitor * managedType, which sets the attribute which will be managed dynamically by the plug-in whenever the attribute in linkType is modified * linkScope, which restricts the plug-in activity to a specific subtree within the directory tree Dependencies Database Performance-Related Information Any attribute set in linkType must only allow values in a DN format. Any attribute set in managedType must be multi-valued. Further Information 4.1.34. Managed Entries Plug-in Plug-in Information Description Plug-in ID Managed Entries Configuration Entry DN cn=Managed Entries,cn=plugins,cn=config Description Container entry for automatically generated directory entries. Each configuration entry defines a target subtree and a template entry. When a matching entry in the target subtree is created, then the plug-in automatically creates a new, related entry based on the template. Type preoperation Configurable Options on off Default Setting off Configurable Arguments None for the main plug-in entry. Each plug-in instance has four possible attributes: * originScope, which sets the search base * originFilter, which sets the search base for matching entries * managedScope, which sets the subtree under which to create new managed entries * managedTemplate, which is the template entry used to create the managed entries Dependencies Database Performance-Related Information None Further Information 4.1.35. MemberOf Plug-in Plug-in Information Description Plug-in ID memberOf Configuration Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Description Manages the memberOf attribute on user entries, based on the member attributes in the group entry. Type postoperation Configurable Options on off Default Setting off Configurable Arguments * memberOfAttr sets the attribute to generate in people's entries to show their group membership. * memberOfGroupAttr sets the attribute to use to identify group member's DNs. Dependencies Database Performance-Related Information None Further Information 4.1.36. Multi-master Replication Plug-in Plug-in Parameter Description Plug-in ID replication-multimaster DN of Configuration Entry cn=Multimaster Replication plugin,cn=plugins,cn=config Description Enables replication between two current Directory Servers Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Named: ldbm database * Named: DES * Named: Class of Service Performance-Related Information Further Information 4.1.37. Name and Optional UID Syntax Plug-in Plug-in Parameter Description Plug-in ID nameoptuid-syntax DN of Configuration Entry cn=Name And Optional UID Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules to store and search for a DN with an optional unique ID; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.38. Numeric String Syntax Plug-in Plug-in Parameter Description Plug-in ID numstr-syntax DN of Configuration Entry cn=Numeric String Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for strings of numbers and spaces; from RFC 4517 . 
Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.39. Octet String Syntax Plug-in Note Use the Octet String syntax instead of Binary, which is deprecated. Plug-in Parameter Description Plug-in ID octetstring-syntax DN of Configuration Entry cn=Octet String Syntax,cn=plugins,cn=config Description Supports octet string syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.40. OID Syntax Plug-in Plug-in Parameter Description Plug-in ID oid-syntax DN of Configuration Entry cn=OID Syntax,cn=plugins,cn=config Description Supports object identifier (OID) syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.41. PAM Pass Through Auth Plug-in Plug-in Parameter Description Plug-in ID pam_passthruauth DN of Configuration Entry cn=PAM Pass Through Auth,cn=plugins,cn=config Description Enables pass-through authentication for PAM, meaning that a PAM service can use the Directory Server as its user authentication store. Type preoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies Database Performance-Related Information Further Information 4.1.42. Pass Through Authentication Plug-in Plug-in Parameter Description Plug-in ID passthruauth DN of Configuration Entry cn=Pass Through Authentication,cn=plugins,cn=config Description Enables pass-through authentication , the mechanism which allows one directory to consult another to authenticate bind requests. Type preoperation Configurable Options on off Default Setting off Configurable Arguments ldap://example.com:389/o=example Dependencies Database Performance-Related Information Pass-through authentication slows down bind requests a little because they have to make an extra hop to the remote server. See the "Using Pass-through Authentication" chapter in the Red Hat Directory Server Administration Guide . Further Information 4.1.43. Password Storage Schemes Directory Server implements the password storage schemes as plug-ins. However, the cn=Password Storage Schemes,cn=plugins,cn=config entry itself is just a container, not a plug-in entry. All password storage scheme plug-ins are stored as a subentry of this container. To display all password storage schemes plug-ins, enter: Warning Red Hat recommends not disabling the password scheme plug-ins nor to change the configurations of the plug-ins to prevent unpredictable authentication behavior. Strong Password Storage Schemes Red Hat recommends using only the following strong password storage schemes (strongest first): PBKDF2_SHA256 (default) The password-based key derivation function 2 (PBKDF2) was designed to expend resources to counter brute force attacks. PBKDF2 supports a variable number of iterations to apply the hashing algorithm. Higher iterations improve security but require more hardware resources. 
In Directory Server, the PBKDF2_SHA256 scheme is implemented using 30,000 iterations to apply the SHA256 algorithm. This value is hard-coded and will be increased in future versions of Directory Server without requiring interaction by an administrator. Note The network security service (NSS) database in Red Hat Enterprise Linux 6 does not support PBKDF2. Therefore you cannot use this password scheme in a replication topology with Directory Server 9. SSHA512 The salted secure hashing algorithm (SSHA) implements an enhanced version of the secure hashing algorithm (SHA), that uses a randomly generated salt to increase the security of the hashed password. SSHA512 implements the hashing algorithm using 512 bits. Weak Password Storage Schemes Besides the recommended strong password storage schemes, Directory Server supports the following weak schemes for backward compatibility: AES CLEAR CRYPT CRYPT-MD5 CRYPT-SHA256 CRYPT-SHA512 DES MD5 NS-MTA-MD5 [a] SHA [b] SHA256 SHA384 SHA512 SMD5 SSHA SSHA256 SSHA384 [a] Directory Server only supports authentication using this scheme. You can no longer use it to encrypt passwords. [b] 160 bit Important Only continue using a weak scheme over a short time frame, as it increases security risks. 4.1.44. Posix Winsync API Plug-in Plug-in Parameter Description Plug-in ID posix-winsync-plugin DN of Configuration Entry cn=Posix Winsync API,cn=plugins,cn=config Description Enables and configures Windows synchronization for Posix attributes set on Active Directory user and group entries. Type preoperation Configurable Arguments * on off * memberUID mapping (groups) * converting and sorting memberUID values in lower case (groups) * memberOf fix-up tasks with sync operations * use Windows 2003 Posix schema Default Setting off Configurable Arguments None Dependencies 4.1.45. Postal Address String Syntax Plug-in Plug-in Parameter Description Plug-in ID postaladdress-syntax DN of Configuration Entry cn=Postal Address Syntax,cn=plugins,cn=config Description Supports postal address syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.46. Printable String Syntax Plug-in Plug-in Parameter Description Plug-in ID printablestring-syntax DN of Configuration Entry cn=Printable String Syntax,cn=plugins,cn=config Description Supports syntaxes and matching rules for alphanumeric and select punctuation strings (for strings which conform to printable strings as defined in RFC 4517 ). Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.47. Referential Integrity Postoperation Plug-in Plug-in Parameter Description Plug-in ID referint DN of Configuration Entry cn=Referential Integrity Postoperation,cn=plugins,cn=config Description Enables the server to ensure referential integrity Type postoperation Configurable Options All configuration and on off Default Setting off Configurable Arguments When enabled, the post-operation Referential Integrity Plug-in performs integrity updates on the member , uniquemember , owner , and seeAlso attributes immediately after a delete or rename operation. 
The plug-in can be configured to perform integrity checks on all other attributes. For details, see the corresponding section in the Directory Server Administration Guide . Dependencies Database Performance-Related Information The Referential Integrity Plug-in should be enabled only on one supplier in a multi-supplier replication environment to avoid conflict resolution loops. When enabling the plug-in on chained servers, be sure to analyze the performance resource and time needs as well as integrity needs; integrity checks can be time consuming and demanding on memory and CPU. All attributes specified must be indexed for both presence and equality. Further Information 4.1.48. Retro Changelog Plug-in Plug-in Parameter Description Plug-in ID retrocl DN of Configuration Entry cn=Retro Changelog Plugin,cn=plugins,cn=config Description Used by LDAP clients for maintaining application compatibility with Directory Server 4.x versions. Maintains a log of all changes occurring in the Directory Server. The retro changelog offers the same functionality as the changelog in the 4.x versions of Directory Server. This plug-in exposes the cn=changelog suffix to clients, so that clients can use this suffix with or without persistent search for simple sync applications. Type object Configurable Options on off Default Setting off Configurable Arguments See Section 4.16, "Retro Changelog Plug-in Attributes" for further information on the two configuration attributes for this plug-in. Dependencies * Type: Database * Named: Class of Service Performance-Related Information May slow down Directory Server update performance. Further Information 4.1.49. Roles Plug-in Plug-in Parameter Description Plug-in ID roles DN of Configuration Entry cn=Roles Plugin,cn=plugins,cn=config Description Enables the use of roles in the Directory Server Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in * Named: Views Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.50. RootDN Access Control Plug-in Plug-in Parameter Description Plug-in ID rootdn-access-control DN of Configuration Entry cn=RootDN Access Control,cn=plugins,cn=config Description Enables and configures access controls to use for the root DN entry. Type internalpreoperation Configurable Options on off Default Setting off Configurable Attributes * rootdn-open-time and rootdn-close-time for time-based access controls * rootdn-days-allowed for day-based access controls * rootdn-allow-host, rootdn-deny-host, rootdn-allow-ip, and rootdn-deny-ip for host-based access controls Dependencies None Further Information 4.1.51. Schema Reload Plug-in Plug-in Information Description Plug-in ID schemareload Configuration Entry DN cn=Schema Reload,cn=plugins,cn=config Description Task plug-in to reload schema files Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information 4.1.52. 
Space Insensitive String Syntax Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Space Insensitive String Syntax,cn=plugins,cn=config Description Syntax for handling space-insensitive values Type syntax Configurable Options on off Default Setting off Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.53. State Change Plug-in Plug-in Parameter Description Plug-in ID statechange DN of Configuration Entry cn=State Change Plugin,cn=plugins,cn=config Description Enables state-change-notification service Type postoperation Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information 4.1.54. Syntax Validation Task Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=Syntax Validation Task,cn=plugins,cn=config Description Enables syntax validation for attribute values Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Further Information 4.1.55. Telephone Syntax Plug-in Plug-in Parameter Description Plug-in ID tele-syntax DN of Configuration Entry cn=Telephone Syntax,cn=plugins,cn=config Description Supports telephone number syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.56. Teletex Terminal Identifier Syntax Plug-in Plug-in Parameter Description Plug-in ID teletextermid-syntax DN of Configuration Entry cn=Teletex Terminal Identifier Syntax,cn=plugins,cn=config Description Supports international telephone number syntaxes and related matching rules from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.57. Telex Number Syntax Plug-in Plug-in Parameter Description Plug-in ID telex-syntax DN of Configuration Entry cn=Telex Number Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for the telex number, country code, and answerback code of a telex terminal; from RFC 4517 . Type syntax Configurable Options on off Default Setting on Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.58. URI Syntax Plug-in Plug-in Parameter Description Plug-in ID none DN of Configuration Entry cn=URI Syntax,cn=plugins,cn=config Description Supports syntaxes and related matching rules for unique resource identifiers (URIs), including unique resource locators (URLs); from RFC 4517 . Type syntax Configurable Options on off Default Setting off Configurable Arguments None Dependencies None Performance-Related Information Do not modify the configuration of this plug-in. If enabled, Red Hat recommends leaving this plug-in running at all times. Further Information 4.1.59. 
USN Plug-in Plug-in Parameter Description Plug-in ID USN DN of Configuration Entry cn=USN,cn=plugins,cn=config Description Sets an update sequence number (USN) on an entry, for every entry in the directory, whenever there is a modification, including adding and deleting entries and modifying attribute values. Type object Configurable Options on off Default Setting off Configurable Arguments None Dependencies Database Performance-Related Information For replication, it is recommended that the entryUSN configuration attribute be excluded using fractional replication. Further Information 4.1.60. Views Plug-in Plug-in Parameter Description Plug-in ID views DN of Configuration Entry cn=Views,cn=plugins,cn=config Description Enables the use of views in the Directory Server databases. Type object Configurable Options on off Default Setting on Configurable Arguments None Dependencies * Type: Database * Named: State Change Plug-in Performance-Related Information Do not modify the configuration of this plug-in. Red Hat recommends leaving this plug-in running at all times. Further Information 4.2. List of Attributes Common to All Plug-ins This list provides a brief attribute description, the entry DN, valid range, default value, syntax, and an example for each attribute. 4.2.1. nsslapdPlugin (Object Class) Each Directory Server plug-in belongs to the nsslapdPlugin object class. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.41 Table 4.1. Required Attributes Attribute Definition objectClass Gives the object classes assigned to the entry. cn Gives the common name of the entry. Section 4.2.8, "nsslapd-pluginPath" Identifies the plugin library name (without the library suffix). Section 4.2.7, "nsslapd-pluginInitfunc" Identifies an initialization function of the plugin. Section 4.2.10, "nsslapd-pluginType" Identifies the type of plugin. Section 4.2.6, "nsslapd-pluginId" Identifies the plugin ID. Section 4.2.12, "nsslapd-pluginVersion" Identifies the version of plugin. Section 4.2.11, "nsslapd-pluginVendor" Identifies the vendor of plugin. Section 4.2.4, "nsslapd-pluginDescription" Identifies the description of the plugin. Section 4.2.5, "nsslapd-pluginEnabled" Identifies whether or not the plugin is enabled. Section 4.2.9, "nsslapd-pluginPrecedence" Sets the priority for the plug-in in the execution order. 4.2.2. nsslapd-logAccess This attribute enables you to log search operations run by the plug-in to the file set in the nsslapd-accesslog parameter in cn=config . Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-logAccess: Off 4.2.3. nsslapd-logAudit This attribute enables you to log and audit modifications to the database originated from the plug-in. Successful modification events are logged in the audit log, if the nsslapd-auditlog-logging-enabled parameter is enabled in cn=config . To log failed modification database operations by a plug-in, enable the nsslapd-auditfaillog-logging-enabled attribute in cn=config . Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-logAudit: Off 4.2.4. nsslapd-pluginDescription This attribute provides a description of the plug-in. 
Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Default Value None Syntax DirectoryString Example nsslapd-pluginDescription: acl access check plug-in 4.2.5. nsslapd-pluginEnabled This attribute specifies whether the plug-in is enabled. This attribute can be changed over protocol but will only take effect when the server is restarted. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-pluginEnabled: on 4.2.6. nsslapd-pluginId This attribute specifies the plug-in ID. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in ID Default Value None Syntax DirectoryString Example nsslapd-pluginId: chaining database 4.2.7. nsslapd-pluginInitfunc This attribute specifies the plug-in function to be initiated. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in function Default Value None Syntax DirectoryString Example nsslapd-pluginInitfunc: NS7bitAttr_Init 4.2.8. nsslapd-pluginPath This attribute specifies the full path to the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid path Default Value None Syntax DirectoryString Example nsslapd-pluginPath: uid-plugin 4.2.9. nsslapd-pluginPrecedence This attribute sets the precedence or priority for the execution order of a plug-in. Precedence defines the execution order of plug-ins, which allows more complex environments or interactions since it can enable a plug-in to wait for a completed operation before being executed. This is more important for pre-operation and post-operation plug-ins. Plug-ins with a value of 1 have the highest priority and are run first; plug-ins with a value of 99 have the lowest priority. The default is 50. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values 1 to 99 Default Value 50 Syntax Integer Example nsslapd-pluginPrecedence: 3 4.2.10. nsslapd-pluginType This attribute specifies the plug-in type. See Section 4.3.5, "nsslapd-plugin-depends-on-type" for further information. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in type Default Value None Syntax DirectoryString Example nsslapd-pluginType: preoperation 4.2.11. nsslapd-pluginVendor This attribute specifies the vendor of the plug-in. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any approved plug-in vendor Default Value Red Hat, Inc. Syntax DirectoryString Example nsslapd-pluginVendor: Red Hat, Inc. 4.2.12. nsslapd-pluginVersion This attribute specifies the plug-in version. Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values Any valid plug-in version Default Value Product version number Syntax DirectoryString Example nsslapd-pluginVersion: 11.3 4.3. Attributes Allowed by Certain Plug-ins 4.3.1. nsslapd-dynamic-plugins Directory Server has dynamic plug-ins that can be enabled without restarting the server. The nsslapd-dynamic-plugins attribute specifies whether the server is configured to allow dynamic plug-ins. By default, dynamic plug-ins are disabled. Warning Directory Server does not support dynamic plug-ins. Use it only for testing and debugging purposes. Some plug-ins cannot be configured as dynamic, and they require the server to be restarted. 
Plug-in Parameter Description Entry DN cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-dynamic-plugins: on 4.3.2. nsslapd-pluginConfigArea Some plug-in entries are container entries, and multiple instances of the plug-in are created beneath this container in cn=plugins,cn=config . However, the cn=plugins,cn=config is not replicated, which means that the plug-in configurations beneath those container entries must be configured manually, in some way, on every Directory Server instance. The nsslapd-pluginConfigArea attribute points to another container entry, in the main database area, which contains the plug-in instance entries. This container entry can be in a replicated database, which allows the plug-in configuration to be replicated. Plug-in Parameter Description Entry DN cn= plug-in name ,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DN Example nsslapd-pluginConfigArea: cn=managed entries container,ou=containers,dc=example,dc=com 4.3.3. nsslapd-pluginLoadNow This attribute specifies whether to load all of the symbols used by a plug-in immediately ( true ), as well as all symbols references by those symbols, or to load the symbol the first time it is used ( false ). Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsslapd-pluginLoadNow: false 4.3.4. nsslapd-pluginLoadGlobal This attribute specifies whether the symbols in dependent libraries are made visible locally ( false ) or to the executable and to all shared objects ( true ). Plug-in Parameter Description Entry DN cn=plug-in name,cn=plugins,cn=config Valid Values true | false Default Value false Syntax DirectoryString Example nsslapd-pluginLoadGlobal: false 4.3.5. nsslapd-plugin-depends-on-type Multi-valued attribute used to ensure that plug-ins are called by the server in the correct order. Takes a value which corresponds to the type number of a plug-in, contained in the attribute nsslapd-pluginType . See Section 4.2.10, "nsslapd-pluginType" for further information. All plug-ins with a type value which matches one of the values in the following valid range will be started by the server prior to this plug-in. The following postoperation Referential Integrity Plug-in example shows that the database plug-in will be started prior to the postoperation Referential Integrity Plug-in. Plug-in Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values database Default Value Syntax DirectoryString Example nsslapd-plugin-depends-on-type: database 4.3.6. nsslapd-plugin-depends-on-named Multi-valued attribute used to ensure that plug-ins are called by the server in the correct order. Takes a value which corresponds to the cn value of a plug-in. The plug-in with a cn value matching one of the following values will be started by the server prior to this plug-in. If the plug-in does not exist, the server fails to start. The following postoperation Referential Integrity Plug-in example shows that the Views plug-in is started before Roles. If Views is missing, the server is not going to start. Plug-in Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values Class of Service Default Value Syntax DirectoryString Example * nsslapd-plugin-depends-on-named: Views * nsslapd-pluginId: roles 4.4. 
Database Plug-in Attributes The database plug-in is also organized in an information tree, as shown in Figure 4.1, "Database Plug-in" . Figure 4.1. Database Plug-in All plug-in technology used by the database instances is stored in the cn=ldbm database plug-in node. This section presents the additional attribute information for each of the nodes in bold in the cn=ldbm database,cn=plugins,cn=config information tree. 4.4.1. Database Attributes under cn=config,cn=ldbm database,cn=plugins,cn=config This section covers the global configuration attributes, common to all instances, that are stored in the cn=config,cn=ldbm database,cn=plugins,cn=config tree node. 4.4.1.1. nsslapd-backend-implement The nsslapd-backend-implement parameter defines the database back end Directory Server uses. Important Directory Server currently only supports the Berkeley Database (BDB). Therefore, you cannot set this parameter to a different value. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values bdb Default Value bdb Syntax Directory String Example nsslapd-backend-implement: bdb 4.4.1.2. nsslapd-backend-opt-level This parameter can trigger experimental code to improve write performance. Possible values: 0 : Disables the parameter. 1 : The replication update vector is not written to the database during the transaction. 2 : Changes the order of taking the back end lock and starting the transaction. 4 : Moves code out of the transaction. All values can be combined. For example, 7 enables all optimization features. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by Red Hat support. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 | 1 | 2 | 4 Default Value 0 Syntax Integer Example nsslapd-backend-opt-level: 0 4.4.1.3. nsslapd-directory This attribute specifies the absolute path to the database instance. If the database instance is created manually, this attribute must be included; it is set by default (and is modifiable) in the Directory Server Console. Once the database instance is created, do not modify this path, as any changes risk preventing the server from accessing data. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid absolute path to the database instance Default Value Syntax DirectoryString Example nsslapd-directory: /var/lib/dirsrv/slapd- instance /db 4.4.1.4. nsslapd-exclude-from-export This attribute contains a space-separated list of names of attributes to exclude from an entry when a database is exported. This is mainly used for some configuration and operational attributes which are specific to a server instance. Do not remove any of the default values for this attribute, since that may affect server performance. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid attribute Default Value entrydn entryid dncomp parentid numSubordinates entryusn Syntax DirectoryString Example nsslapd-exclude-from-export: entrydn entryid dncomp parentid numSubordinates entryusn 4.4.1.5. nsslapd-db-transaction-wait If you enable the nsslapd-db-transaction-wait parameter, Directory Server does not start the transaction but waits until lock resources are available. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-transaction-wait: off 4.4.1.6.
nsslapd-db-private-import-mem The nsslapd-db-private-import-mem parameter manages whether or not Directory Server uses private memory for allocation of regions and mutexes for a database import. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-private-import-mem: on 4.4.1.7. nsslapd-db-deadlock-policy The nsslapd-db-deadlock-policy parameter sets the libdb library-internal deadlock policy. Important Only change this parameter if instructed by Red Hat Support. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0-9 Default Value 0 Syntax DirectoryString Example nsslapd-db-deadlock-policy: 9 4.4.1.8. nsslapd-idl-switch The nsslapd-idl-switch parameter sets the IDL format Directory Server uses. Note that Red Hat no longer supports the old IDL format. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values new | old Default Value new Syntax Directory String Example nsslapd-idl-switch: new 4.4.1.9. nsslapd-idlistscanlimit This performance-related attribute, present by default, specifies the number of entry IDs that are searched during a search operation. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message, with additional error information explaining the problem. It is advisable to keep the default value to improve search performance. For further details, see the corresponding sections in the: Directory Server Performance Tuning Guide Directory Server Administration Guide This parameter can be changed while the server is running, and the new value will affect subsequent searches. The corresponding user-level attribute is nsIDListScanLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 100 to the maximum 32-bit integer value (2147483647) entry IDs Default Value 4000 Syntax Integer Example nsslapd-idlistscanlimit: 4000 4.4.1.10. nsslapd-lookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries in response to a search request. The Directory Manager DN, however, is, by default, unlimited and overrides any other settings specified here. It is worth noting that binder-based resource limits work for this limit, which means that if a value for the operational attribute nsLookThroughLimit is present in the entry as which a user binds, the default limit will be overridden. Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 5000 Syntax Integer Example nsslapd-lookthroughlimit: 5000 4.4.1.11. nsslapd-mode This attribute specifies the permissions used for newly created index files. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any four-digit octal number. However, mode 0600 is recommended. This allows read and write access for the owner of the index files (which is the user as whom the ns-slapd runs) and no access for other users. Default Value 600 Syntax Integer Example nsslapd-mode: 0600 4.4.1.12. 
nsslapd-pagedidlistscanlimit This performance-related attribute specifies the number of entry IDs that are searched, specifically, for a search operation using the simple paged results control. This attribute works the same as the nsslapd-idlistscanlimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsslapd-idlistscanlimit is used to paged searches as well as non-paged searches. The corresponding user-level attribute is nsPagedIDListScanLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 0 Syntax Integer Example nsslapd-pagedidlistscanlimit: 5000 4.4.1.13. nsslapd-pagedlookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries for a search which uses the simple paged results control. This attribute works the same as the nsslapd-lookthroughlimit attribute, except that it only applies to searches with the simple paged results control. If this attribute is not present or is set to zero, then the nsslapd-lookthroughlimit is used to paged searches as well as non-paged searches. The corresponding user-level attribute is nsPagedLookThroughLimit . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 0 Syntax Integer Example nsslapd-pagedlookthroughlimit: 25000 4.4.1.14. nsslapd-rangelookthroughlimit This performance-related attribute specifies the maximum number of entries that the Directory Server will check when examining candidate entries in response to a range search request. Range searches use operators to set a bracket to search for and return an entire subset of entries within the directory. For example, this searches for every entry modified at or after midnight on January 1: The nature of a range search is that it must evaluate every single entry within the directory to see if it is within the range given. Essentially, a range search is always an all IDs search. For most users, the look-through limit kicks in and prevents range searches from turning into an all IDs search. This improves overall performance and speeds up range search results. However, some clients or administrative users like Directory Manager may not have a look-through limit set. In that case, a range search can take several minutes to complete or even continue indefinitely. The nsslapd-rangelookthroughlimit attribute sets a separate range look-through limit that applies to all users, including Directory Manager. This allows clients and administrative users to have high look-through limits while still allowing a reasonable limit to be set on potentially performance-impaired range searches. Note Unlike other resource limits, this applies to searches by any user, including the Directory Manager, regular users, and other LDAP clients. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer in entries (where -1 is unlimited) Default Value 5000 Syntax Integer Example nsslapd-rangelookthroughlimit: 5000 4.4.1.15. nsslapd-search-bypass-filter-test If you enable the nsslapd-search-bypass-filter-test parameter, Directory Server bypasses filter checks when it builds candidate lists during a search. 
If you set the parameter to verify , Directory Server evaluates the filter against the search candidate entries. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off | verify Default Value on Syntax Directory String Example nsslapd-search-bypass-filter-test: on 4.4.1.16. nsslapd-search-use-vlv-index The nsslapd-search-use-vlv-index parameter enables and disables virtual list view (VLV) searches. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax Directory String Example nsslapd-search-use-vlv-index: on 4.4.1.17. nsslapd-subtree-rename-switch Every directory entry is stored as a key in an entry index file. The index key maps the current entry DN to its meta entry in the index. This mapping is done either by the RDN of the entry or by the full DN of the entry. When a subtree entry is allowed to be renamed (meaning an entry with child entries, effectively renaming the whole subtree), its entries are stored in the entryrdn.db index, which associates parent and child entries by an assigned ID rather than their DN. If subtree rename operations are not allowed, then the entryrdn.db index is disabled and the entrydn.db index is used, which simply uses full DNs, with the implicit parent-child relationships. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values off | on Default Value on Syntax DirectoryString Example nsslapd-subtree-rename-switch: on 4.4.2. Database Attributes under cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config This section covers the global configuration attributes common to all instances, which are stored in the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config tree node. 4.4.2.1. nsslapd-cache-autosize This performance tuning-related attribute sets the percentage of free memory that is used in total for the database and entry cache. For example, if the value is set to 10 , 10% of the system's free RAM is used for both caches. If this value is set to a value greater than 0 , auto-sizing is enabled for the database and entry cache. For optimized performance, Red Hat recommends not disabling auto-sizing. However, in certain situations it can be necessary to disable auto-sizing. In this case, set the nsslapd-cache-autosize attribute to 0 and manually set the database cache in the nsslapd-dbcachesize attribute and the entry cache in the nsslapd-cachememsize attribute. For further details about auto-sizing, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . Note If the nsslapd-cache-autosize and nsslapd-cache-autosize-split attributes are both set to high values, such as 100 , Directory Server fails to start. To fix the problem, set both parameters to more reasonable values. For example: Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 100. If 0 is set, the default value is used instead. Default Value 10 Syntax Integer Example nsslapd-cache-autosize: 10 4.4.2.2. nsslapd-cache-autosize-split This performance tuning-related attribute sets the percentage of RAM that is used for the database cache. The remaining percentage is used for the entry cache. For example, if the value is set to 40 , the database cache uses 40%, and the entry cache uses the remaining 60%, of the free RAM reserved in the nsslapd-cache-autosize attribute. For further details about auto-sizing, see the corresponding section in the Red Hat Directory Server Performance Tuning Guide .
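As an illustration only, disabling auto-sizing and setting both caches manually might look like the following ldapmodify input. The sizes shown (256 MB database cache, 512 MB entry cache), the userRoot back end name, and the bind DN are placeholder assumptions, not recommendations; note that nsslapd-dbcachesize requires a server restart to take effect.

# ldapmodify -D "cn=Directory Manager" -W -x -H ldap://server.example.com
dn: cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 0
-
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 268435456

dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 536870912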
Note If the nsslapd-cache-autosize and nsslapd-cache-autosize-split attribute are both set to high values, such as 100 , Directory Server fails to start. To fix the problem, set both parameters to more reasonable values. For example: Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 99. If 0 is set, the default value is used instead. Default Value 40 Syntax Integer Example nsslapd-cache-autosize-split: 40 4.4.2.3. nsslapd-db-checkpoint-interval This sets the amount of time in seconds after which the Directory Server sends a checkpoint entry to the database transaction log. The database transaction log contains a sequential listing of all recent database operations and is used for database recovery only. A checkpoint entry indicates which database operations have been physically written to the directory database. The checkpoint entries are used to determine where in the database transaction log to begin recovery after a system failure. The nsslapd-db-checkpoint-interval attribute is absent from dse.ldif . To change the checkpoint interval, add the attribute to dse.ldif . This attribute can be dynamically modified using ldapmodify . For further information on modifying this attribute, see the "Tuning Directory Server Performance" chapter in the Red Hat Directory Server Administration Guide . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat Technical Support or Red Hat Consulting. Inconsistent settings of this attribute and other configuration attributes may cause the Directory Server to be unstable. For more information on database transaction logging, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 10 to 300 seconds Default Value 60 Syntax Integer Example nsslapd-db-checkpoint-interval: 120 4.4.2.4. nsslapd-db-circular-logging This attribute specifies circular logging for the transaction log files. If this attribute is switched off, old transaction log files are not removed and are kept renamed as old log transaction files. Turning circular logging off can severely degrade server performance and, as such, should only be modified with the guidance of Red Hat Technical Support or Red Hat Consulting. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-circular-logging: on 4.4.2.5. nsslapd-db-compactdb-interval The nsslapd-db-compactdb-interval attribute defines the interval in seconds when Directory Server compacts the databases and replication changelogs. The compact operation returns the unused pages to the file system and the database file size shrinks. Note that compacting the database is resource-intensive and should not be done too often. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 (no compaction) to 2147483647 second Default Value 2592000 (30 days) Syntax Integer Example nsslapd-db-compactdb-interval: 2592000 4.4.2.6. nsslapd-db-compactdb-time The nsslapd-db-compactdb-time attribute sets the time of the day when Directory Server compacts all databases and their replication changelogs. 
The compaction task runs after the compaction interval ( nsslapd-db-compactdb-interval ) has been exceeded. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values HH:MM. Time is set in 24-hour format Default Value 23:59 Syntax DirectoryString Example nsslapd-db-compactdb-time: 23:59 4.4.2.7. nsslapd-db-debug This attribute specifies whether additional error information is to be reported to Directory Server. To report error information, set the parameter to on . This parameter is meant for troubleshooting; enabling the parameter may slow down the Directory Server. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-debug: off 4.4.2.8. nsslapd-db-durable-transactions This attribute sets whether database transaction log entries are immediately written to the disk. The database transaction log contains a sequential listing of all recent database operations and is used for database recovery only. With durable transactions enabled, every directory change will always be physically recorded in the log file and, therefore, able to be recovered in the event of a system failure. However, the durable transactions feature may also slow the performance of the Directory Server. When durable transactions are disabled, all transactions are logically written to the database transaction log but may not be physically written to disk immediately. If there were a system failure before a directory change was physically written to disk, that change would not be recoverable. The nsslapd-db-durable-transactions attribute is absent from dse.ldif . To disable durable transactions, add the attribute to dse.ldif . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat Technical Support or Red Hat Consulting. Inconsistent settings of this attribute and other configuration attributes may cause the Directory Server to be unstable. For more information on database transaction logging, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-durable-transactions: on 4.4.2.9. nsslapd-db-home-directory To move the database to another physical location for performance reasons, use this parameter to specify the home directory. This situation will occur only for certain combinations of the database cache size, the size of physical memory, and kernel tuning attributes. In particular, this situation should not occur if the database cache size is less than 100 megabytes. The conditions to check for are the following: the disk is heavily used (more than 1 megabyte per second of data transfer); there is a long service time (more than 100 ms); there is mostly write activity. If these are all true, use the nsslapd-db-home-directory attribute to specify a subdirectory of a tempfs type filesystem. The directory referenced by the nsslapd-db-home-directory attribute must be a subdirectory of a filesystem of type tempfs (such as /tmp ). However, Directory Server does not create the subdirectory referenced by this attribute. This directory must be created either manually or by using a script.
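For example, a minimal sketch of creating the directory manually, assuming an instance named phonebook and the /tmp tempfs filesystem shown in the example value below (both are placeholders):

# mkdir -p /tmp/slapd-phonebook
# chown dirsrv:dirsrv /tmp/slapd-phonebook

The chown assumes the server runs as the dirsrv user; use whichever account runs ns-slapd in your deployment. Because a tempfs filesystem does not persist across reboots, creating the directory from a startup script is usually the more reliable option.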
Failure to create the directory referenced by the nsslapd-db-home-directory attribute will result in Directory Server being unable to start. Also, if there are multiple Directory Servers on the same machine, their nsslapd-db-home-directory attributes must be configured with different directories. Failure to do so will result in the databases for both directories becoming corrupted. The use of this attribute causes internal Directory Server database files to be moved to the directory referenced by the attribute. It is possible, but unlikely, that the server will no longer start after the files have been moved because not enough memory can be allocated. This is a symptom of an overly large database cache size being configured for the server. If this happens, reduce the size of the database cache size to a value where the server will start again. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid directory name in a tempfs filesystem, such as /tmp Default Value Syntax DirectoryString Example nsslapd-db-home-directory: /tmp/slapd-phonebook 4.4.2.10. nsslapd-db-idl-divisor This attribute specifies the index block size in terms of the number of blocks per database page. The block size is calculated by dividing the database page size by the value of this attribute. A value of 1 makes the block size exactly equal to the page size. The default value of 0 sets the block size to the page size minus an estimated allowance for internal database overhead. For the majority of installations, the default value should not be changed unless there are specific tuning needs. Before modifying the value of this attribute, export all databases using the db2ldif script. Once the modification has been made, reload the databases using the ldif2db script. Warning This parameter should only be used by very advanced users. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 8 Default Value 0 Syntax Integer Example nsslapd-db-idl-divisor: 2 4.4.2.11. nsslapd-db-locks Lock mechanisms in Directory Server control how many copies of Directory Server processes can run at the same time. The nsslapd-db-locks parameter sets the maximum number of locks. Only set this parameter to a higher value if Directory Server runs out of locks and logs libdb: Lock table is out of available locks error messages. If you set a higher value without a need, this increases the size of the /var/lib/dirsrv/slapd- instance_name /db__db.* files without any benefit. For more information about monitoring the logs and determining a realistic value, see the corresponding section in the Directory Server Performance Tuning Guide . The service must be restarted for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 Default Value 10000 Syntax Integer Example nsslapd-db-locks: 10000 4.4.2.12. nsslapd-db-locks-monitoring-enable Running out of database locks can lead to data corruption. With the nsslapd-db-locks-monitoring-enable parameter, you can enable or disable database lock monitoring. If the parameter is enabled, which is the default, Directory Server terminates all searches if the number of active database locks is higher than the percentage threshold configured in nsslapd-db-locks-monitoring-threshold . If an issue occurs, the administrator can increase the number of database locks in the nsslapd-db-locks parameter. 
Restart the service for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsslapd-db-locks-monitoring-enable: on 4.4.2.13. nsslapd-db-locks-monitoring-pause If monitoring of database locks is enabled in the nsslapd-db-locks-monitoring-enable parameter, nsslapd-db-locks-monitoring-pause defines the interval in milliseconds that the monitoring thread sleeps between the checks. If you set this parameter to a too high value, the server can run out of database locks before the monitoring check happens. However, setting a too low value can slow down the server. You do not have to restart the server for this setting to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 0 - 2147483647 (value in milliseconds) Default Value 500 Syntax DirectoryString Example nsslapd-db-locks-monitoring-pause: 500 4.4.2.14. nsslapd-db-locks-monitoring-threshold If monitoring of database locks is enabled in the nsslapd-db-locks-monitoring-enable parameter, nsslapd-db-locks-monitoring-threshold sets the maximum percentage of used database locks before Directory Server terminates searches to avoid further lock exhaustion. Restart the service for changes to this attribute to take effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 70 - 95 Default Value 90 Syntax DirectoryString Example nsslapd-db-locks-monitoring-threshold: 90 4.4.2.15. nsslapd-db-logbuf-size This attribute specifies the log information buffer size. Log information is stored in memory until the buffer fills up or the transaction commit forces the buffer to be written to disk. Larger buffer sizes can significantly increase throughput in the presence of long running transactions, highly concurrent applications, or transactions producing large amounts of data. The log information buffer size is the transaction log size divided by four. The nsslapd-db-logbuf-size attribute is only valid if the nsslapd-db-durable-transactions attribute is set to on . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 32K to maximum 32-bit integer (limited to the amount of memory available on the machine) Default Value 32K Syntax Integer Example nsslapd-db-logbuf-size: 32K 4.4.2.16. nsslapd-db-logdirectory This attribute specifies the path to the directory that contains the database transaction log. The database transaction log contains a sequential listing of all recent database operations. Directory Server uses this information to recover the database after an instance shut down unexpectedly. By default, the database transaction log is stored in the same directory as the directory database. To update this parameter, you must manually update the /etc/dirsrv/slapd- instance_name /dse.ldif file. For details, see the Changing the Transaction Log Directory section in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid path Default Value Syntax DirectoryString Example nsslapd-db-logdirectory: /var/lib/dirsrv/slapd- instance_name /db/ 4.4.2.17. nsslapd-db-logfile-size This attribute specifies the maximum size of a single file in the log in bytes. By default, or if the value is set to 0 , a maximum size of 10 megabytes is used. 
The maximum size is an unsigned 4-byte value. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to unsigned 4-byte integer Default Value 10MB Syntax Integer Example nsslapd-db-logfile-size: 10 MB 4.4.2.18. nsslapd-db-page-size This attribute specifies the size of the pages used to hold items in the database in bytes. The minimum size is 512 bytes, and the maximum size is 64 kilobytes. If the page size is not explicitly set, Directory Server defaults to a page size of 8 kilobytes. Changing this default value can have a significant performance impact. If the page size is too small, it results in extensive page splitting and copying, whereas if the page size is too large it can waste disk space. Before modifying the value of this attribute, export all databases using the db2ldif script. Once the modification has been made, reload the databases using the ldif2db script. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 512 bytes to 64 kilobytes Default Value 8KB Syntax Integer Example nsslapd-db-page-size: 8KB 4.4.2.19. nsslapd-db-spin-count This attribute specifies the number of times that test-and-set mutexes should spin without blocking. Warning Never touch this value unless you are very familiar with the inner workings of Berkeley DB or are specifically told to do so by Red Hat support. The default value of 0 causes BDB to calculate the actual value by multiplying the number of available CPU cores (as reported by the nproc utility or the sysconf(_SC_NPROCESSORS_ONLN) call) by 50 . For example, with a processor with 8 logical cores, leaving this attribute set to 0 is equivalent to setting it to 400 . It is not possible to turn spinning off entirely - if you want to minimize the amount of times test-and-set mutexes will spin without blocking, set this attribute to 1 . Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 2147483647 (2^31-1) Default Value 0 Syntax Integer Example nsslapd-db-spin-count: 0 4.4.2.20. nsslapd-db-transaction-batch-max-wait If Section 4.4.2.22, "nsslapd-db-transaction-batch-val" is set, the flushing of transactions is done by a separate thread when the set batch value is reached. However if there are only a few updates, this process might take too long. This parameter controls when transactions should be flushed latest, independently of the batch count. The values is defined in milliseconds. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by the Red Hat support. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 (value in milliseconds) Default Value 50 Syntax Integer Example nsslapd-db-transaction-batch-max-wait: 50 4.4.2.21. nsslapd-db-transaction-batch-min-wait If Section 4.4.2.22, "nsslapd-db-transaction-batch-val" is set, the flushing of transactions is done by a separate thread when the set batch value is reached. However if there are only a few updates, this process might take too long. This parameter controls when transactions should be flushed earliest, independently of the batch count. The values is defined in milliseconds. Warning This parameter is experimental. Never change its value unless you are specifically told to do so by the Red Hat support. 
Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 - 2147483647 (value in milliseconds) Default Value 50 Syntax Integer Example nsslapd-db-transaction-batch-min-wait: 50 4.4.2.22. nsslapd-db-transaction-batch-val This attribute specifies how many transactions will be batched before being committed. This attribute can improve update performance when full transaction durability is not required. This attribute can be dynamically modified using ldapmodify . For further information on modifying this attribute, see the "Tuning Directory Server Performance" chapter in the Red Hat Directory Server Administration Guide . Warning Setting this value will reduce data consistency and may lead to loss of data. This is because if there is a power outage before the server can flush the batched transactions, those transactions in the batch will be lost. Do not set this value unless specifically requested to do so by Red Hat support. If this attribute is not defined or is set to a value of 0 , transaction batching will be turned off, and it will be impossible to make remote modifications to this attribute using LDAP. However, setting this attribute to a value greater than 0 causes the server to delay committing transactions until the number of queued transactions is equal to the attribute value. A value greater than 0 also allows modifications to this attribute remotely using LDAP. A value of 1 for this attribute allows modifications to the attribute setting remotely using LDAP, but results in no batching behavior. A value of 1 at server startup is therefore useful for maintaining normal durability while also allowing transaction batching to be turned on and off remotely when required. Remember that the value for this attribute may require modifying the nsslapd-db-logbuf-size attribute to ensure sufficient log buffer size for accommodating the batched transactions. Note The nsslapd-db-transaction-batch-val attribute is only valid if the nsslapd-db-durable-transaction attribute is set to on . For more information on database transaction logging, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 30 Default Value 0 (or turned off) Syntax Integer Example nsslapd-db-transaction-batch-val: 5 4.4.2.23. nsslapd-db-trickle-percentage This attribute sets that at least the specified percentage of pages in the shared-memory pool are clean by writing dirty pages to their backing files. This is to ensure that a page is always available for reading in new information without having to wait for a write. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 0 to 100 Default Value 40 Syntax Integer Example nsslapd-db-trickle-percentage: 40 4.4.2.24. nsslapd-db-verbose This attribute specifies whether to record additional informational and debugging messages when searching the log for checkpoints, doing deadlock detection, and performing recovery. This parameter is meant for troubleshooting, and enabling the parameter may slow down the Directory Server. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-db-verbose: off 4.4.2.25. 
nsslapd-import-cache-autosize This performance tuning-related attribute automatically sets the size of the import cache ( importCache ) to be used during the command-line-based import process of LDIF files to the database (the ldif2db operation). In Directory Server, the import operation can be run as a server task or exclusively on the command-line. In the task mode, the import operation runs as a general Directory Server operation. The nsslapd-import-cache-autosize attribute enables the import cache to be set automatically to a predetermined size when the import operation is run on the command-line. The attribute can also be used by Directory Server during the task mode import for allocating a specified percentage of free memory for import cache. By default, the nsslapd-import-cache-autosize attribute is enabled and is set to a value of -1 . This value autosizes the import cache for the ldif2db operation only, automatically allocating fifty percent (50%) of the free physical memory for the import cache. The percentage value (50%) is hard-coded and cannot be changed. Setting the attribute value to 50 ( nsslapd-import-cache-autosize: 50 ) has the same effect on performance during an ldif2db operation. However, such a setting will have the same effect on performance when the import operation is run as a Directory Server task. The -1 value autosizes the import cache just for the ldif2db operation and not for any, including import, general Directory Server tasks. Note The purpose of a -1 setting is to enable the ldif2db operation to benefit from free physical memory but, at the same time, not compete for valuable memory with the entry cache, which is used for general operations of the Directory Server. Setting the nsslapd-import-cache-autosize attribute value to 0 turns off the import cache autosizing feature - that is, no autosizing occurs during either mode of the import operation. Instead, Directory Server uses the nsslapd-import-cachesize attribute for import cache size, with a default value of 20000000 . There are three caches in the context of Directory Server: database cache, entry cache, and import cache. The import cache is only used during the import operation. The nsslapd-cache-autosize attribute, which is used for autosizing the entry cache and database cache, is used during the Directory Server operations only and not during the ldif2db command-line operation; the attribute value is the percentage of free physical memory to be allocated for the entry cache and database cache. If both the autosizing attributes, nsslapd-cache-autosize and nsslapd-import-cache-autosize , are enabled, ensure that their sum is less than 100. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range -1, 0 (turns import cache autosizing off) to 100 Default Value -1 (turns import cache autosizing on for ldif2db only and allocates 50% of the free physical memory to import cache) Syntax Integer Example nsslapd-import-cache-autosize: -1 4.4.2.26. nsslapd-dbcachesize This performance tuning-related attribute specifies the database index cache size, in bytes. This is one of the most important values for controlling how much physical RAM the directory server uses. This is not the entry cache. This is the amount of memory the Berkeley database back end will use to cache the indexes (the .db files) and other files. This value is passed to the Berkeley DB API function set_cachesize . 
If automatic cache resizing is activated, this attribute is overridden when the server replaces these values with its own guessed values at a later stage of the server startup. For more technical information on this attribute, see the cache size section of the Berkeley DB reference guide at https://docs.oracle.com/cd/E17076_04/html/programmer_reference/general_am_conf.html#am_conf_cachesize . Attempting to set a value that is not a number or is too big for a 32-bit signed integer returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Note Do not set the database cache size manually. Red Hat recommends to use the database cache auto-sizing feature for optimized performance. For further see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 4 gigabytes for 32-bit platforms and 500 kilobytes to 2^64-1 for 64-bit platforms Default Value Syntax Integer Example nsslapd-dbcachesize: 10000000 4.4.2.27. nsslapd-dbncache This attribute can split the LDBM cache into equally sized separate pieces of memory. It is possible to specify caches that are large enough so that they cannot be allocated contiguously on some architectures; for example, some systems limit the amount of memory that may be allocated contiguously by a process. If nsslapd-dbncache is 0 or 1 , the cache will be allocated contiguously in memory. If it is greater than 1 , the cache will be broken up into ncache , equally sized separate pieces of memory. To configure a dbcache size larger than 4 gigabytes, add the nsslapd-dbncache attribute to cn=config,cn=ldbm database,cn=plugins,cn=config between the nsslapd-dbcachesize and nsslapd-db-logdirectory attribute lines. Set this value to an integer that is one-quarter (1/4) the amount of memory in gigabytes. For example, for a 12 gigabyte system, set the nsslapd-dbncache value to 3 ; for an 8 gigabyte system, set it to 2 . This attribute is provided only for system modification/diagnostics and should be changed only with the guidance of Red Hat technical support or Red Hat professional services. Inconsistent settings of this attribute and other configuration attributes may cause the Directory Server to be unstable. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values 1 to 4 Default Value 1 Syntax Integer Example nsslapd-dbncache: 1 4.4.2.28. nsslapd-search-bypass-filter-test If you enable the nsslapd-search-bypass-filter-test parameter, Directory Server bypasses filter checks when it builds candidate lists during a search. If you set the parameter to verify , Directory Server evaluates the filter against the search candidate entries. Parameter Description Entry DN cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values on | off | verify Default Value on Syntax Directory String Example nsslapd-search-bypass-filter-test: on 4.4.3. Database Attributes under cn=monitor,cn=ldbm database,cn=plugins,cn=config Global read-only attributes containing database statistics for monitoring activity on the databases are stored in the cn=monitor,cn=ldbm database,cn=plugins,cn=config tree node. 
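These attributes can be read with an ordinary LDAP search of the monitoring entry; a minimal sketch, where the bind DN and server URL are placeholders:

# ldapsearch -D "cn=Directory Manager" -W -x -H ldap://server.example.com \
  -b "cn=monitor,cn=ldbm database,cn=plugins,cn=config" -s base "(objectClass=*)"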
For more information on these entries, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administration Guide . dbcachehits This attribute shows the requested pages found in the database. dbcachetries This attribute shows the total cache lookups. dbcachehitratio This attribute shows the percentage of requested pages found in the database cache (hits/tries). dbcachepagein This attribute shows the pages read into the database cache. dbcachepageout This attribute shows the pages written from the database cache to the backing file. dbcacheroevict This attribute shows the clean pages forced from the cache. dbcacherwevict This attribute shows the dirty pages forced from the cache. normalizedDNcachetries Total number of cache lookups since the instance was started. normalizedDNcachehits Normalized DNs found within the cache. normalizedDNcachemisses Normalized DNs not found within the cache. normalizedDNcachehitratio Percentage of the normalized DNs found in the cache. currentNormalizedDNcachesize Current size of the normalized DN cache in bytes. maxNormalizedDNcachesize Current value of the nsslapd-ndn-cache-max-size parameter. For details how to update this setting, see Section 3.1.1.130, "nsslapd-ndn-cache-max-size" . currentNormalizedDNcachecount Number of normalized cached DNs. 4.4.4. Database Attributes under cn= database_name ,cn=ldbm database,cn=plugins,cn=config The cn= database_name subtree contains all the configuration data for the user-defined database. The cn=userRoot subtree is called userRoot by default. However, this is not hard-coded and, given the fact that there are going to be multiple database instances, this name is changed and defined by the user as and when new databases are added. The cn=userRoot database referenced can be any user database. The following attributes are common to databases, such as cn=userRoot . 4.4.4.1. nsslapd-cachesize This attribute has been deprecated. To resize the entry cache, use nsslapd-cachememsize. This performance tuning-related attribute specifies the cache size in terms of the number of entries it can hold. However, this attribute is deprecated in favor of the nsslapd-cachememsize attribute, which sets an absolute allocation of RAM for the entry cache size, as described in Section 4.4.4.2, "nsslapd-cachememsize" . Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. The server has to be restarted for changes to this attribute to go into effect. Note The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 1 to 2 32 -1 on 32-bit systems or 2 63 -1 on 64-bit systems or -1, which means limitless Default Value -1 Syntax Integer Example nsslapd-cachesize: -1 4.4.4.2. nsslapd-cachememsize This performance tuning-related attribute specifies the size, in bytes, for the available memory space for the entry cache. The simplest method is limiting cache size in terms of memory occupied. Activating automatic cache resizing overrides this attribute, replacing these values with its own guessed values at a later stage of the server startup. 
Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Note Do not set the database cache size manually. Red Hat recommends to use the entry cache auto-sizing feature for optimized performance. For further see the corresponding section in the Red Hat Directory Server Performance Tuning Guide . Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 2 64 -1 on 64-bit systems Default Value 209715200 (200 MiB) Syntax Integer Example nsslapd-cachememsize: 209715200 4.4.4.3. nsslapd-directory This attribute specifies the path to the database instance. If it is a relative path, it starts from the path specified by nsslapd-directory in the global database entry cn=config,cn=ldbm database,cn=plugins,cn=config . The database instance directory is named after the instance name and located in the global database directory, by default. After the database instance has been created, do not modify this path, because any changes risk preventing the server from accessing data. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid path to the database instance Default Value Syntax DirectoryString Example nsslapd-directory: /var/lib/dirsrv/slapd- instance /db/userRoot 4.4.4.4. nsslapd-dncachememsize This performance tuning-related attribute specifies the size, in bytes, for the available memory space for the DN cache. The DN cache is similar to the entry cache for a database, only its table stores only the entry ID and the entry DN. This allows faster lookups for rename and moddn operations. The simplest method is limiting cache size in terms of memory occupied. Attempting to set a value that is not a number or is too big for a 32-bit signed integer (on 32-bit systems) returns an LDAP_UNWILLING_TO_PERFORM error message with additional error information explaining the problem. Note The performance counter for this setting goes to the highest 64-bit integer, even on 32-bit systems, but the setting itself is limited on 32-bit systems to the highest 32-bit integer because of how the system addresses memory. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Range 500 kilobytes to 2 32 -1 on 32-bit systems and to 2 64 -1 on 64-bit systems Default Value 10,485,760 (10 megabytes) Syntax Integer Example nsslapd-dncachememsize: 10485760 4.4.4.5. nsslapd-readonly This attribute specifies read-only mode for a single back-end instance. If this attribute has a value of off , then users have all read, write, and execute permissions allowed by their access permissions. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-readonly: off 4.4.4.6. nsslapd-require-index When switched to on , this attribute allows one to refuse unindexed searches. This performance-related attribute avoids saturating the server with erroneous searches. 
Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-index: off 4.4.4.7. nsslapd-require-internalop-index When a plug-in modifies data, it has a write lock on the database. On large databases, if a plug-in then executes an unindexed search, the plug-in can use all database locks and corrupt the database or the server becomes unresponsive. To avoid this problem, you can reject internal unindexed searches by enabling the nsslapd-require-internalop-index parameter. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsslapd-require-internalop-index: off 4.4.4.8. nsslapd-suffix This attribute specifies the suffix of the database link . This is a single-valued attribute because each database instance can have only one suffix. Previously, it was possible to have more than one suffix on a single database instance, but this is no longer the case. As a result, this attribute is single-valued to enforce the fact that each database instance can only have one suffix entry. Any changes made to this attribute after the entry has been created take effect only after the server containing the database link is restarted. Parameter Description Entry DN cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example nsslapd-suffix: o=Example 4.4.4.9. vlvBase This attribute sets the base DN for which the browsing or virtual list view (VLV) index is created. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid DN Default Value Syntax DirectoryString Example vlvBase: ou=People,dc=example,dc=com 4.4.4.10. vlvEnabled The vlvEnabled attribute provides status information about a specific VLV index, and Directory Server sets this attribute at run time. Although vlvEnabled is shown in the configuration, you cannot modify this attribute. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values 0 (disabled) | 1 (enabled) Default Value 1 Syntax DirectoryString Example vlvEnbled: 0 4.4.4.11. vlvFilter The browsing or virtual list view (VLV) index is created by running a search according to a filter and including entries which match that filter in the index. The filter is specified in the vlvFilter attribute. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid LDAP filter Default Value Syntax DirectoryString Example vlvFilter: ( 4.4.4.12. vlvIndex (Object Class) A browsing index or virtual list view (VLV) index dynamically generates an abbreviated index of entry headers that makes it much faster to visually browse large indexes. A VLV index definition has two parts: one which defines the index and one which defines the search used to identify entries to add to the index. The vlvIndex object class defines the index entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.42 Table 4.2. 
Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. Section 4.4.4.15, "vlvSort" Identifies the attribute list that the browsing index (virtual list view index) is sorted on. Table 4.3. Allowed Attributes Attribute Definition Section 4.4.4.10, "vlvEnabled" Stores the availability of the browsing index. Section 4.4.4.16, "vlvUses" Contains the count the browsing index is used. 4.4.4.13. vlvScope This attribute sets the scope of the search to run for entries in the browsing or virtual list view (VLV) index. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values * 1 (one-level or children search) * 2 (subtree search) Default Value Syntax Integer Example vlvScope: 2 4.4.4.14. vlvSearch (Object Class) A browsing index or virtual list view (VLV) index dynamically generates an abbreviated index of entry headers that makes it much faster to visually browse large indexes. A VLV index definition has two parts: one which defines the index and one which defines the search used to identify entries to add to the index. The vlvSearch object class defines the search filter entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.38 Table 4.4. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. Section 4.4.4.9, "vlvBase" Identifies base DN the browsing index is created. Section 4.4.4.13, "vlvScope" Identifies the scope to define the browsing index. Section 4.4.4.11, "vlvFilter" Identifies the filter string to define the browsing index. Table 4.5. Allowed Attributes Attribute Definition multiLineDescription Gives a text description of the entry. 4.4.4.15. vlvSort This attribute sets the sort order for returned entries in the browsing or virtual list view (VLV) index. Note The entry for this attribute is a vlvIndex entry beneath the vlvSearch entry. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values Any Directory Server attributes, in a space-separated list Default Value Syntax DirectoryString Example vlvSort: cn givenName o ou sn 4.4.4.16. vlvUses The vlvUses attribute contains the count the browsing index uses, and Directory Server sets this attribute at run time. Although vlvUses is shown in the configuration, you cannot modify this attribute. For more information on VLV indexes, see the indexing chapter in the Administration Guide . Parameter Description Entry DN cn= index_name ,cn=userRoot,cn=ldbm database,cn=plugins,cn=config Valid Values N/A Default Value Syntax DirectoryString Example vlvUses: 800 4.4.5. Database Attributes under cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config The attributes in this tree node entry are all read-only, database performance counters. All of the values for these attributes are 32-bit integers, except for entrycachehits and entrycachetries . If the nsslapd-counters attribute in cn=config is set to on , then some of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For the database monitoring, the entrycachehits and entrycachetries counters use 64-bit integers. 
Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. nsslapd-db-abort-rate This attribute shows the number of transactions that have been aborted. nsslapd-db-active-txns This attribute shows the number of transactions that are currently active. nsslapd-db-cache-hit This attribute shows the requested pages found in the cache. nsslapd-db-cache-try This attribute shows the total cache lookups. nsslapd-db-cache-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. nsslapd-db-cache-size-bytes This attribute shows the total cache size in bytes. nsslapd-db-clean-pages This attribute shows the clean pages currently in the cache. nsslapd-db-commit-rate This attribute shows the number of transactions that have been committed. nsslapd-db-deadlock-rate This attribute shows the number of deadlocks detected. nsslapd-db-dirty-pages This attribute shows the dirty pages currently in the cache. nsslapd-db-hash-buckets This attribute shows the number of hash buckets in the buffer hash table. nsslapd-db-hash-elements-examine-rate This attribute shows the total number of hash elements traversed during hash table lookups. nsslapd-db-hash-search-rate This attribute shows the total number of buffer hash table lookups. nsslapd-db-lock-conflicts This attribute shows the total number of locks not immediately available due to conflicts. nsslapd-db-lock-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. nsslapd-db-lock-request-rate This attribute shows the total number of locks requested. nsslapd-db-lockers This attribute shows the number of current lockers. nsslapd-db-log-bytes-since-checkpoint This attribute shows the number of bytes written to this log since the last checkpoint. nsslapd-db-log-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. nsslapd-db-log-write-rate This attribute shows the number of megabytes and bytes written to this log. nsslapd-db-longest-chain-length This attribute shows the longest chain ever encountered in buffer hash table lookups. nsslapd-db-page-create-rate This attribute shows the pages created in the cache. nsslapd-db-page-read-rate This attribute shows the pages read into the cache. nsslapd-db-page-ro-evict-rate This attribute shows the clean pages forced from the cache. nsslapd-db-page-rw-evict-rate This attribute shows the dirty pages forced from the cache. nsslapd-db-page-trickle-rate This attribute shows the dirty pages written using the memp_trickle interface. nsslapd-db-page-write-rate This attribute shows the pages written from the cache to the backing file. nsslapd-db-pages-in-use This attribute shows all pages, clean or dirty, currently in use. nsslapd-db-txn-region-wait-rate This attribute shows the number of times that a thread of control was forced to wait before obtaining the region lock. currentdncachecount This attribute shows the number of DNs currently present in the DN cache. currentdncachesize This attribute shows the total size, in bytes, of DNs currently present in the DN cache. maxdncachesize This attribute shows the maximum size, in bytes, of DNs that can be maintained in the database DN cache.
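A quick way to gauge database cache efficiency is to request only the relevant counters from this entry; the bind DN and server URL below are placeholders:

# ldapsearch -D "cn=Directory Manager" -W -x -H ldap://server.example.com \
  -b "cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config" -s base \
  "(objectClass=*)" nsslapd-db-cache-hit nsslapd-db-cache-try

The cache hit ratio is then nsslapd-db-cache-hit divided by nsslapd-db-cache-try.

4.4.6.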
Database Attributes under cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config The attributes in this tree node entry are all read-only, database performance counters. If the nsslapd-counters attribute in cn=config is set to on , then some of the counters kept by the Directory Server instance increment using 64-bit integers, even on 32-bit machines or with a 32-bit version of Directory Server. For database monitoring, the entrycachehits and entrycachetries counters use 64-bit integers. Note The nsslapd-counters attribute enables 64-bit support for these specific database and server counters. The counters which use 64-bit integers are not configurable; the 64-bit integers are either enabled for all the allowed counters or disabled for all allowed counters. dbfilename- number This attribute gives the name of the file and provides a sequential integer identifier (starting at 0) for the file. All associated statistics for the file are given this same numerical identifier. dbfilecachehit- number This attribute gives the number of times that a search requiring data from this file was performed and that the data were successfully obtained from the cache. The number in this attribute's name corresponds to the one in dbfilename . dbfilecachemiss- number This attribute gives the number of times that a search requiring data from this file was performed and that the data could not be obtained from the cache. The number in this attribute's name corresponds to the one in dbfilename . dbfilepagein- number This attribute gives the number of pages brought to the cache from this file. The number in this attribute's name corresponds to the one in dbfilename . dbfilepageout- number This attribute gives the number of pages for this file written from cache to disk. The number in this attribute's name corresponds to the one in dbfilename . currentDNcachecount Number of cached DNs. currentDNcachesize Current size of the DN cache in bytes. DNcachehitratio Percentage of the DNs found in the cache. DNcachehits DNs found within the cache. DNcachemisses DNs not found within the cache. DNcachetries Total number of cache lookups since the instance was started. maxDNcachesize Current value of the nsslapd-ndn-cache-max-size parameter. For details on how to update this setting, see Section 3.1.1.130, "nsslapd-ndn-cache-max-size" . 4.4.7. Database Attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config The set of default indexes is stored here. Default indexes are configured per back end in order to optimize Directory Server functionality for the majority of setup scenarios. All indexes, except system-essential ones, can be removed, but care should be taken so as not to cause unnecessary disruptions. For further information on indexes, see the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . 4.4.7.1. cn This attribute provides the name of the attribute to index. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid index cn Default Value None Syntax DirectoryString Example cn: aci 4.4.7.2. nsIndex This object class defines an index in the back end database. This object is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.44 Table 4.6. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. Section 4.4.7.5, "nsSystemIndex" Identifies whether or not the index is a system-defined index. Table 4.7.
Allowed Attributes Attribute Definition description Gives a text description of the entry. Section 4.4.7.3, "nsIndexType" Identifies the index type. Section 4.4.7.4, "nsMatchingRule" Identifies the matching rule. 4.4.7.3. nsIndexType This optional, multi-valued attribute specifies the type of index for Directory Server operations and takes the values of the attributes to be indexed. Each required index type has to be entered on a separate line. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values * pres = presence index * eq = equality index * approx = approximate index * sub = substring index * matching rule = international index * index browse = browsing index Default Value Syntax DirectoryString Example nsIndexType: eq 4.4.7.4. nsMatchingRule This optional, multi-valued attribute specifies the ordering matching rule name or OID used to match values and to generate index keys for the attribute. This is most commonly used to ensure that equality and range searches work correctly for languages other than English (7-bit ASCII). This is also used to allow range searches to work correctly for integer syntax attributes that do not specify an ordering matching rule in their schema definition. uidNumber and gidNumber are two commonly used attributes that fall into this category. For example, for a uidNumber that uses integer syntax, the rule attribute could be nsMatchingRule: integerOrderingMatch . Note Any change to this attribute will not take effect until the change is saved and the index is rebuilt using db2index , which is described in more detail in the "Managing Indexes" chapter of the Red Hat Directory Server Administration Guide ). Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values Any valid collation order object identifier (OID) Default Value None Syntax DirectoryString Example nsMatchingRule: 2.16.840.1.113730.3.3.2.3.1 (For Bulgarian) 4.4.7.5. nsSystemIndex This mandatory attribute specifies whether the index is a system index , an index which is vital for Directory Server operations. If this attribute has a value of true , then it is system-essential. System indexes should not be removed, as this will seriously disrupt server functionality. Parameter Description Entry DN cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config Valid Values true | false Default Value Syntax DirectoryString Example nssystemindex: true 4.4.8. Database Attributes under cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config In addition to the set of default indexes that are stored under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config , custom indexes can be created for user-defined back end instances; these are stored under cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config . Each indexed attribute represents a subentry under the cn=config information tree nodes, as shown in the following diagram: Figure 4.2. Indexed Attribute Representing a Subentry For example, the index file for the aci attribute under o=UserRoot appears in the Directory Server as follows: These entries share all of the indexing attributes listed for the default indexes in Section 4.4.7, "Database Attributes under cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config" . For further information about indexes, see the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . 4.4.8.1. 
nsIndexIDListScanLimit This multi-valued parameter defines a search limit for certain indexes, or configures certain searches to use no ID list. For further information, see the corresponding section in the Directory Server Performance Tuning Guide . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values See the corresponding section in the Directory Server Performance Tuning Guide . Default Value Syntax DirectoryString Example nsIndexIDListScanLimit: limit=0 type=eq values=inetorgperson 4.4.8.2. nsSubStrBegin By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrBegin attribute sets the required number of characters for an indexed search for the beginning of a search string, before the wildcard. For example: If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrBegin: 2 4.4.8.3. nsSubStrEnd By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrEnd attribute sets the required number of characters for an indexed search for the end of a search string, after the wildcard. For example: If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrEnd: 2 4.4.8.4. nsSubStrMiddle By default, for a search to be indexed, the search string must be at least three characters long, without counting any wildcard characters. For example, the string abc would be an indexed search while ab* would not be. Indexed searches are significantly faster than unindexed searches, so changing the minimum length of the search key is helpful to increase the number of indexed searches. This substring length can be edited based on the position of any wildcard characters. The nsSubStrMiddle attribute sets the required number of characters for an indexed search where a wildcard is used in the middle of a search string. For example: If the value of this attribute is changed, then the index must be regenerated using db2index . Parameter Description Entry DN cn= attribute_name ,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values Any integer Default Value 3 Syntax Integer Example nsSubStrMiddle: 3 4.4.9.
Database Attributes under cn=attributeName,cn=encrypted attributes,cn=database_name,cn=ldbm database,cn=plugins,cn=config The nsAttributeEncryption object class allows selective encryption of attributes within a database. Extremely sensitive information such as credit card numbers and government identification numbers may not be protected enough by routine access control measures. Normally, these attribute values are stored in the clear within the database; encrypting them while they are stored adds another layer of protection. This object class has one attribute, nsEncryptionAlgorithm , which sets the encryption cipher used per attribute. Each encrypted attribute represents a subentry under the above cn=config information tree nodes, as shown in the following diagram: Figure 4.3. Encrypted Attributes under the cn=config Node For example, the database encryption file for the userPassword attribute under o=UserRoot appears in the Directory Server as follows: To configure database encryption, see the "Database Encryption" section of the "Configuring Directory Databases" chapter in the Red Hat Directory Server Administration Guide . For more information about indexes, see the "Managing Indexes" chapter in the Red Hat Directory Server Administration Guide . 4.4.9.1. nsAttributeEncryption (Object Class) This object class is used for core configuration entries which identify and encrypt selected attributes within a Directory Server database. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.316 Table 4.8. Required Attributes objectClass Defines the object classes for the entry. cn Specifies the attribute being encrypted using its common name. Section 4.4.9.2, "nsEncryptionAlgorithm" The encryption cipher used. 4.4.9.2. nsEncryptionAlgorithm nsEncryptionAlgorithm selects the cipher used by nsAttributeEncryption . The algorithm can be set per encrypted attribute. Parameter Description Entry DN cn=attributeName,cn=encrypted attributes,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Valid Values The following are supported ciphers: * Advanced Encryption Standard Block Cipher (AES) * Triple Data Encryption Standard Block Cipher (3DES) Default Value Syntax DirectoryString Example nsEncryptionAlgorithm: AES 4.5. Database Link Plug-in Attributes (Chaining Attributes) The database link plug-in attributes are also organized in an information tree, as shown in the following diagram: Figure 4.4. Database Link Plug-in All plug-in technology used by the database link instances is stored in the cn=chaining database plug-in node. This section presents the additional attribute information for the three nodes marked in bold in the cn=chaining database,cn=plugins,cn=config information tree in Figure 4.4, "Database Link Plug-in" . 4.5.1. Database Link Attributes under cn=config,cn=chaining database,cn=plugins,cn=config This section covers the global configuration attributes common to all instances, which are stored in the cn=config,cn=chaining database,cn=plugins,cn=config tree node. 4.5.1.1. nsActiveChainingComponents This attribute lists the components using chaining. A component is any functional unit in the server. The value of this attribute overrides the value in the global configuration attribute. To disable chaining on a particular database instance, use the value None . This attribute also allows the components used to chain to be altered.
By default, no components are allowed to chain, which explains why this attribute will probably not appear in a list of cn=config,cn=chaining database,cn=plugins,cn=config attributes, as LDAP considers empty attributes to be non-existent. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid component entry Default Value None Syntax DirectoryString Example nsActiveChainingComponents: cn=uid uniqueness,cn=plugins,cn=config 4.5.1.2. nsMaxResponseDelay This error detection, performance-related attribute specifies the maximum amount of time it can take a remote server to respond to an LDAP operation request made by a database link before an error is suspected. Once this delay period has been met, the database link tests the connection with the remote server. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid delay period in seconds Default Value 60 seconds Syntax Integer Example nsMaxResponseDelay: 60 4.5.1.3. nsMaxTestResponseDelay This error detection, performance-related attribute specifies the duration of the test issued by the database link to check whether the remote server is responding. If a response from the remote server is not returned before this period has passed, the database link assumes the remote server is down, and the connection is not used for subsequent operations. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid delay period in seconds Default Value 15 seconds Syntax Integer Example nsMaxTestResponseDelay: 15 4.5.1.4. nsTransmittedControls This attribute, which can be both a global (and thus dynamic) configuration or an instance (that is, cn= database link instance , cn=chaining database,cn=plugins,cn=config ) configuration attribute, allows the controls the database link forwards to be altered. The following controls are forwarded by default by the database link: Managed DSA (OID: 2.16.840.1.113730.3.4.2) Virtual list view (VLV) (OID: 2.16.840.1.113730.3.4.9) Server side sorting (OID: 1.2.840.113556.1.4.473) Loop detection (OID: 1.3.6.1.4.1.1466.29539.12) Other controls, such as dereferencing and simple paged results for searches, can be added to the list of controls to forward. Parameter Description Entry DN cn=config,cn=chaining database,cn=plugins,cn=config Valid Values Any valid OID or the above listed controls forwarded by the database link Default Value None Syntax Integer Example nsTransmittedControls: 1.2.840.113556.1.4.473 4.5.2. Database Link Attributes under cn=default instance config,cn=chaining database,cn=plugins,cn=config Default configuration attributes for database link instances are housed in the cn=default instance config,cn=chaining database,cn=plugins,cn=config tree node. 4.5.2.1. nsAbandonedSearchCheckInterval This attribute shows the number of seconds that pass before the server checks for abandoned operations. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to maximum 32-bit integer (2147483647) seconds Default Value 1 Syntax Integer Example nsAbandonedSearchCheckInterval: 10 4.5.2.2. nsBindConnectionsLimit This attribute shows the maximum number of TCP connections the database link establishes with the remote server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 50 connections Default Value 3 Syntax Integer Example nsBindConnectionsLimit: 3 4.5.2.3.
nsBindRetryLimit Contrary to what the name suggests, this attribute does not specify the number of times a database link retries to bind with the remote server but the number of times it tries to bind with the remote server. A value of 1 here indicates that the database link only attempts to bind once. Note Retries only occur for connection failures and not for other types of errors, such as invalid bind DNs or bad passwords. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to 5 Default Value 3 Syntax Integer Example nsBindRetryLimit: 3 4.5.2.4. nsBindTimeout This attribute shows the amount of time before the bind attempt times out. There is no real valid range for this attribute, except reasonable patience limits. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to 60 seconds Default Value 15 Syntax Integer Example nsBindTimeout: 15 4.5.2.5. nsCheckLocalACI Reserved for advanced use only. This attribute controls whether ACIs are evaluated on the database link as well as the remote data server. Changes to this attribute only take effect once the server has been restarted. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsCheckLocalACI: on 4.5.2.6. nsConcurrentBindLimit This attribute shows the maximum number of concurrent bind operations per TCP connection. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 25 binds Default Value 10 Syntax Integer Example nsConcurrentBindLimit: 10 4.5.2.7. nsConcurrentOperationsLimit This attribute specifies the maximum number of concurrent operations allowed. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to 50 operations Default Value 2 Syntax Integer Example nsConcurrentOperationsLimit: 5 4.5.2.8. nsConnectionLife This attribute specifies connection lifetime. Connections between the database link and the remote server can be kept open for an unspecified time or closed after a specific period of time. It is faster to keep the connections open, but it uses more resources. When the value is 0 and a list of failover servers is provided in the nsFarmServerURL attribute, the main server is never contacted after failover to the alternate server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 0 to limitless seconds (where 0 means forever) Default Value 0 Syntax Integer Example nsConnectionLife: 0 4.5.2.9. nsOperationConnectionsLimit This attribute shows the maximum number of LDAP connections the database link establishes with the remote server. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range 1 to n connections Default Value 20 Syntax Integer Example nsOperationConnectionsLimit: 10 4.5.2.10. nsProxiedAuthorization Reserved for advanced use only. If you disable proxied authorization, binds for chained operations are executed as the user set in the nsMultiplexorBindDn attribute. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value on Syntax DirectoryString Example nsProxiedAuthorization: on 4.5.2.11.
nsReferralOnScopedSearch This attribute controls whether referrals are returned by scoped searches. This attribute can be used to optimize the directory because returning referrals in response to scoped searches is more efficient. A referral is returned to all the configured farm servers. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example nsReferralOnScopedSearch: off 4.5.2.12. nsSizeLimit This attribute shows the default size limit for the database link in entries. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range -1 (no limit) to maximum 32-bit integer (2147483647) entries Default Value 2000 Syntax Integer Example nsSizeLimit: 2000 4.5.2.13. nsTimeLimit This attribute shows the default search time limit for the database link. Parameter Description Entry DN cn=default instance config,cn=chaining database,cn=plugins,cn=config Valid Range -1 to maximum 32-bit integer (2147483647) seconds Default Value 3600 Syntax Integer Example nsTimeLimit: 3600 4.5.3. Database Link Attributes under cn=database_link_name,cn=chaining database,cn=plugins,cn=config This information node stores the attributes concerning the remote server that contains the data. A farm server is a server which contains data in one or more databases. For cascading chaining, a database link can point to another database link. 4.5.3.1. nsBindMechanism This attribute sets the bind mechanism that the database link uses to connect to the remote (farm) server. A farm server is a server containing data in one or more databases. This attribute configures the connection type, either standard, TLS, or SASL. empty. This performs simple authentication and requires the nsMultiplexorBindDn and nsMultiplexorCredentials attributes to give the bind information. EXTERNAL. This uses a TLS certificate to authenticate the farm server to the remote server. Either the farm server URL must be set to the secure URL ( ldaps ) or the nsUseStartTLS attribute must be set to on . Additionally, the remote server must be configured to map the farm server's certificate to its bind identity. Certificate mapping is described in the Administration Guide . DIGEST-MD5. This uses SASL with DIGEST-MD5 encryption. As with simple authentication, this requires the nsMultiplexorBindDn and nsMultiplexorCredentials attributes to give the bind information. GSSAPI. This uses Kerberos-based authentication over SASL. The farm server must be connected over the standard port, meaning the URL has ldap , because the Directory Server does not support SASL/GSSAPI over TLS. The farm server must be configured with a Kerberos keytab, and the remote server must have a defined SASL mapping for the farm server's bind identity. Setting up Kerberos keytabs and SASL mappings is described in the Administration Guide . Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values * empty * EXTERNAL * DIGEST-MD5 * GSSAPI Default Value empty Syntax DirectoryString Example nsBindMechanism: GSSAPI 4.5.3.2. nsFarmServerURL This attribute gives the LDAP URL of the remote server. A farm server is a server containing data in one or more databases. This attribute can contain optional servers for failover, separated by spaces. If using cascading chaining, this URL can point to another database link.
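As a hedged illustration of updating this attribute on an existing database link, the following ldapmodify sketch replaces the farm server URL with a failover list; the link name, host names, and connection options are hypothetical, not values taken from this reference:
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: cn=examplelink,cn=chaining database,cn=plugins,cn=config
changetype: modify
replace: nsFarmServerURL
nsFarmServerURL: ldap://farm1.example.com farm2.example.com:389/
The space-separated failover hosts and trailing slash follow the URL format shown in the example row below.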
Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Any valid remote server LDAP URL Default Value Syntax DirectoryString Example nsFarmServerURL: ldap://farm1.example.com farm2.example.com:389 farm3.example.com:1389/ 4.5.3.3. nsMultiplexorBindDN This attribute gives the DN of the administrative entry used to communicate with the remote server. The multiplexor is the server that contains the database link and communicates with the farm server. This bind DN cannot be the Directory Manager, and, if this attribute is not specified, the database link binds as anonymous . Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Default Value DN of the multiplexor Syntax DirectoryString Example nsMultiplexorBindDN: cn=proxy manager 4.5.3.4. nsMultiplexorCredentials Password for the administrative user, given in plain text. If no password is provided, it means that users can bind as anonymous . The password is encrypted in the configuration file. The example below is what is shown, not what is typed. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values Any valid password, which will then be encrypted using the DES reversible password encryption schema Default Value Syntax DirectoryString Example nsMultiplexorCredentials: {DES} 9Eko69APCJfF 4.5.3.5. nshoplimit This attribute specifies the maximum number of times a database is allowed to chain; that is, the number of times a request can be forwarded from one database link to another. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Range 1 to an appropriate upper limit for the deployment Default Value 10 Syntax Integer Example nsHopLimit: 3 4.5.3.6. nsUseStartTLS This attribute sets whether to use Start TLS to initiate a secure, encrypted connection over an insecure port. This attribute can be used if the nsBindMechanism attribute is set to EXTERNAL but the farm server URL set to the standard URL ( ldap ) or if the nsBindMechanism attribute is left empty. Parameter Description Entry DN cn= database_link_name ,cn=chaining database,cn=plugins,cn=config Valid Values off | on Default Value off Syntax DirectoryString Example nsUseStartTLS: on 4.5.4. Database Link Attributes under cn=monitor,cn=database instance name,cn=chaining database,cn=plugins,cn=config Attributes used for monitoring activity on the instances are stored in the cn=monitor,cn=database instance name,cn=chaining database,cn=plugins,cn=config information tree. nsAddCount This attribute gives the number of add operations received. nsDeleteCount This attribute gives the number of delete operations received. nsModifyCount This attribute gives the number of modify operations received. nsRenameCount This attribute gives the number of rename operations received. nsSearchBaseCount This attribute gives the number of base level searches received. nsSearchOneLevelCount This attribute gives the number of one-level searches received. nsSearchSubtreeCount This attribute gives the number of subtree searches received. nsAbandonCount This attribute gives the number of abandon operations received. nsBindCount This attribute gives the number of bind requests received. nsUnbindCount This attribute gives the number of unbinds received. nsCompareCount This attribute gives the number of compare operations received.
nsOperationConnectionCount This attribute gives the number of open connections for normal operations. nsOpenBindConnectionCount This attribute gives the number of open connections for bind operations. 4.6. PAM Pass Through Auth Plug-in Attributes Local PAM configurations on Unix systems can leverage an external authentication store for LDAP users. This is a form of pass-through authentication which allows the Directory Server to use the externally-stored user credentials for directory access. PAM pass-through authentication is configured in child entries beneath the PAM Pass Through Auth Plug-in container entry. All of the possible configuration attributes for PAM authentication (defined in the 60pam-plugin.ldif schema file) are available to a child entry; the child entry must be an instance of the PAM configuration object class. Example 4.1. Example PAM Pass Through Auth Configuration Entries The PAM configuration, at a minimum, must define a mapping method (a way to identify what the PAM user ID is from the Directory Server entry), the PAM server to use, and whether to use a secure connection to the service. The configuration can be expanded for special settings, such as to exclude or specifically include subtrees or to map a specific attribute value to the PAM user ID. 4.6.1. pamConfig (Object Class) This object class is used to define the PAM configuration to interact with the directory service. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.318 Allowed Attributes Section 4.6.2, "pamExcludeSuffix" Section 4.6.7, "pamIncludeSuffix" Section 4.6.8, "pamMissingSuffix" Section 4.6.4, "pamFilter" Section 4.6.5, "pamIDAttr" Section 4.6.6, "pamIDMapMethod" Section 4.6.3, "pamFallback" Section 4.6.10, "pamSecure" Section 4.6.11, "pamService" nsslapd-pluginConfigArea 4.6.2. pamExcludeSuffix This attribute specifies a suffix to exclude from PAM authentication. OID 2.16.840.1.113730.3.1.2068 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server 4.6.3. pamFallback Sets whether to fallback to regular LDAP authentication if PAM authentication fails. OID 2.16.840.1.113730.3.1.2072 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 4.6.4. pamFilter Sets an LDAP filter to use to identify specific entries within the included suffixes for which to use PAM pass-through authentication. If not set, all entries within the suffix are targeted by the configuration entry. OID 2.16.840.1.113730.3.1.2131 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 4.6.5. pamIDAttr This attribute contains the attribute name which is used to hold the PAM user ID. OID 2.16.840.1.113730.3.1.2071 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 4.6.6. pamIDMapMethod Gives the method to use to map the LDAP bind DN to a PAM identity. Note Directory Server user account inactivation is only validated using the ENTRY mapping method. With RDN or DN, a Directory Server user whose account is inactivated can still bind to the server successfully. OID 2.16.840.1.113730.3.1.2070 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 4.6.7. pamIncludeSuffix This attribute sets a suffix to include for PAM authentication. OID 2.16.840.1.113730.3.1.2067 Syntax DN Multi- or Single-Valued Multi-valued Defined in Directory Server 4.6.8. pamMissingSuffix Identifies how to handle missing include or exclude suffixes. 
The options are ERROR , which causes the bind operation to fail; ALLOW , which logs an error but allows the operation to proceed; and IGNORE , which allows the operation and does not log any errors. OID 2.16.840.1.113730.3.1.2069 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 4.6.9. pamModuleIsThreadSafe By default, Directory Server serializes the Pluggable Authentication Module (PAM) authentications. If you set the pamModuleIsThreadSafe attribute to on , Directory Server starts to perform PAM authentications in parallel. However, ensure that the PAM module you are using is a thread-safe module. Currently, you can use the ldapmodify utility to configure the pamModuleIsThreadSafe attribute: To apply changes, restart the server. OID 2.16.840.1.113730.3.1.2399 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 4.6.10. pamSecure Requires secure TLS connection for PAM authentication. OID 2.16.840.1.113730.3.1.2073 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Directory Server 4.6.11. pamService Contains the service name to pass to PAM. This assumes that the service specified has a configuration file in the /etc/pam.d/ directory. Important The pam_fprintd.so module cannot be in the configuration file referenced by the pamService attribute of the PAM Pass-Through Authentication Plug-in configuration. Using the PAM pam_fprintd.so module causes the Directory Server to hit the max file descriptor limit and can cause the Directory Server process to abort. OID 2.16.840.1.113730.3.1.2074 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Directory Server 4.7. Account Policy Plug-in Attributes Account policies can be set that automatically lock an account after a certain amount of time has elapsed. This can be used to create temporary accounts that are only valid for a preset amount of time or to lock users which have been inactive for a certain amount of time. The Account Policy Plug-in itself only accepts one argument, which points to a plug-in configuration entry. The account policy configuration entry defines, for the entire server, what attributes to use for account policies. Most of the configuration defines attributes to use to evaluate account policies and expiration times, but the configuration also defines what object class to use to identify subtree-level account policy definitions. Once the plug-in is configured globally, account policy entries can be created within the user subtrees, and then these policies can be applied to users and to roles through classes of service. Example 4.2. Account Policy Definition Any entry, both individual users and roles or CoS templates, can be an account policy subentry. Every account policy subentry has its creation and login times tracked against any expiration policy. Example 4.3. User Account with Account Policy 4.7.1. altstateattrname Account expiration policies are based on some timed criteria for the account. For example, for an inactivity policy, the primary criterion may be the last login time, lastLoginTime . However, there may be instances where that attribute does not exist on an entry, such as a user who never logged into his account.
The altstateattrname attribute provides a backup attribute for the server to reference to evaluate the expiration time. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example altstateattrname: createTimeStamp 4.7.2. alwaysRecordLogin By default, only entries which have an account policy directly applied to them - meaning, entries with the acctPolicySubentry attribute - have their login times tracked. If account policies are applied through classes of service or roles, then the acctPolicySubentry attribute is on the template or container entry, not the user entries themselves. The alwaysRecordLogin attribute sets that every entry records its last login time. This allows CoS and roles to be used to apply account policies. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range yes | no Default Value no Syntax DirectoryString Example alwaysRecordLogin: no 4.7.3. alwaysRecordLoginAttr The Account Policy plug-in uses the attribute name set in the alwaysRecordLoginAttr parameter to store the time of the last successful login in this attribute in the user's directory entry. For further information, see the corresponding section in the Directory Server Administration Guide . Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any valid attribute name Default Value stateAttrName Syntax DirectoryString Example alwaysRecordLoginAttr: lastLoginTime 4.7.4. limitattrname The account policy entry in the user directory defines the time limit for the account lockout policy. This time limit can be set in any time-based attribute, and a policy entry could have multiple time-based attributes in it. The attribute within the policy to use for the account inactivation limit is defined in the limitattrname attribute in the Account Policy Plug-in, and it is applied globally to all account policies. Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example limitattrname: accountInactivityLimit 4.7.5. specattrname There are really two configuration entries for an account policy: the global settings in the plug-in configuration entry and then user- or subtree-level settings in an entry within the user directory. An account policy can be set directly on a user entry or it can be set as part of a CoS or role configuration. The way that the plug-in identifies which entries are account policy configuration entries is by identifying a specific attribute on the entry which flags it as an account policy. This attribute in the plug-in configuration is specattrname ; it will usually be set to acctPolicySubentry . Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example specattrname: acctPolicySubentry 4.7.6. stateattrname Account expiration policies are based on some timed criteria for the account. For example, for an inactivity policy, the primary criterion may be the last login time, lastLoginTime . The primary time attribute used to evaluate an account policy is set in the stateattrname attribute.
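Taken together, the account policy attributes described in Section 4.7.1 through Section 4.7.6 are normally set on a single plug-in configuration entry. The following is only a hedged sketch of such an entry; the object classes and the attribute values shown are illustrative assumptions rather than required settings from this reference:
dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: config
alwaysRecordLogin: yes
stateattrname: lastLoginTime
altstateattrname: createTimestamp
specattrname: acctPolicySubentry
limitattrname: accountInactivityLimit
With a configuration along these lines, a policy subentry in the user directory would supply the accountInactivityLimit value that the plug-in compares against lastLoginTime, or against createTimestamp for accounts that have never logged in.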
Parameter Description Entry DN cn=config,cn=Account Policy Plugin,cn=plugins,cn=config Valid Range Any time-based entry attribute Default Value None Syntax DirectoryString Example stateattrname: lastLoginTime 4.8. AD DN Plug-in Attributes The AD DN plug-in supports multiple domain configurations. Create one configuration entry for each domain. For details, see the corresponding section in the Red Hat Directory Server Administration Guide . 4.8.1. cn Sets the domain name of the configuration entry. The plug-in uses the domain name from the authenticating user name to select the corresponding configuration entry. Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any string Default Value None Syntax DirectoryString Example cn: example.com 4.8.2. addn_base Sets the base DN under which Directory Server searches the user's DN. Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any valid DN Default Value None Syntax DirectoryString Example addn_base: ou=People,dc=example,dc=com 4.8.3. addn_filter Sets the search filter. Directory Server replaces the %s variable automatically with the non-domain part of the authenticating user. For example, if the user name in the bind is user_name@example.com , the resulting filter used to search for the corresponding DN is (&(objectClass=account)(uid=user_name)) . Parameter Description Entry DN cn= domain_name ,cn=addn,cn=plugins,cn=config Valid Entry Any valid LDAP search filter Default Value None Syntax DirectoryString Example addn_filter: (&(objectClass=account)(uid=%s)) 4.9. Auto Membership Plug-in Attributes Automembership essentially allows a static group to act like a dynamic group. Different automembership definitions create searches that are automatically run on all new directory entries. The automembership rules search for and identify matching entries - much like the dynamic search filters - and then explicitly add those entries as members to the specified static group. The Auto Membership Plug-in itself is a container entry. Each automember definition is a child of the Auto Membership Plug-in. The automember definition defines the LDAP search base and filter to identify entries and a default group to add them to. Each automember definition can have its own child entry that defines additional conditions for assigning the entry to a group. Regular expressions can be used to include or exclude entries and assign them to specific groups based on those conditions. If the entry matches the main definition and not any of the regular expression conditions, then it uses the group in the main definition. If it matches a regular expression condition, then it is added to the regular expression condition group. 4.9.1. autoMemberDefaultGroup This attribute sets a default or fallback group to add the entry to as a member. If only the definition entry is used, then this is the group to which all matching entries are added. If regular expression conditions are used, then this group is used as a fallback if an entry which matches the LDAP search filter does not match any of the regular expressions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any existing Directory Server group Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberDefaultGroup: cn=hostgroups,ou=groups,dc=example,dc=com 4.9.2. autoMemberDefinition (Object Class) This object class identifies the entry as an automember definition.
This entry must be a child of the Auto Membership Plug-in, cn=Auto Membership Plugin,cn=plugins,cn=config . Allowed Attributes autoMemberScope autoMemberFilter autoMemberDefaultGroup autoMemberGroupingAttr 4.9.3. autoMemberExclusiveRegex This attribute sets a single regular expression to use to identify entries to exclude . If an entry matches the exclusion condition, then it is not included in the group. Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is excluded from the group. The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax(3) man page . Note Exclude conditions are evaluated first and take precedence over include conditions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any regular expression Default Value None Single- or Multi-Valued Multi-valued Syntax DirectoryString Example autoMemberExclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com 4.9.4. autoMemberFilter This attribute sets a standard LDAP search filter to use to search for matching entries. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any valid LDAP search filter Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberFilter:objectclass=ntUser 4.9.5. autoMemberGroupingAttr This attribute gives the name of the member attribute in the group entry and the attribute in the object entry that supplies the member attribute value, in the format group_member_attr:entry_attr . This structures how the Automembership Plug-in adds a member to the group, depending on the group configuration. For example, for a groupOfUniqueNames user group, each member is added as a uniqueMember attribute. The value of uniqueMember is the DN of the user entry. In essence, each group member is identified by the attribute-value pair of uniqueMember: user_entry_DN . The member entry format, then, is uniqueMember:dn . Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberGroupingAttr: member:dn 4.9.6. autoMemberInclusiveRegex This attribute sets a single regular expression to use to identify entries to include . Multiple regular expressions could be used, and if an entry matches any one of those expressions, it is included in the group (assuming it does not match an exclude expression). The format of the expression is a Perl-compatible regular expression (PCRE). For more information on PCRE patterns, see the pcresyntax(3) man page . Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any regular expression Default Value None Single- or Multi-Valued Multi-valued Syntax DirectoryString Example autoMemberInclusiveRegex: fqdn=^www\.web[0-9]+\.example\.com 4.9.7. autoMemberProcessModifyOps By default, the Directory Server invokes the Automembership plug-in for add and modify operations. With this setting, the plug-in updates group membership when you add a group entry to a user or modify a group entry of a user. If you set the autoMemberProcessModifyOps to off , Directory Server only invokes the Automembership plug-in when you add a group entry to a user.
In this case, if an administrator changes a user entry, and that change affects what Automembership groups the user belongs to, the plug-in does not remove the user from the old group and only adds the new group. To update the old group, you must then manually run a fix-up task. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Values on | off Default Value on Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberProcessModifyOps: on 4.9.8. autoMemberRegexRule (Object Class) This object class identifies the entry as a regular expression rule. This entry must be a child of an automember definition ( objectclass: autoMemberDefinition ). Allowed Attributes autoMemberInclusiveRegex autoMemberExclusiveRegex autoMemberTargetGroup 4.9.9. autoMemberScope This attribute sets the subtree DN to search for entries. This is the search base. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server subtree Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberScope: dc=example,dc=com 4.9.10. autoMemberTargetGroup This attribute sets which group to add the entry to as a member, if it meets the regular expression conditions. Parameter Description Entry DN cn=Auto Membership Plugin,cn=plugins,cn=config Valid Range Any Directory Server group Default Value None Single- or Multi-Valued Single Syntax DirectoryString Example autoMemberTargetGroup: cn=webservers,cn=hostgroups,ou=groups,dc=example,dc=com 4.10. Distributed Numeric Assignment Plug-in Attributes The Distributed Numeric Assignment Plug-in manages ranges of numbers and assigns unique numbers within that range to entries. By breaking number assignments into ranges, the Distributed Numeric Assignment Plug-in allows multiple servers to assign numbers without conflict. The plug-in also manages the ranges assigned to servers, so that if one instance runs through its range quickly, it can request additional ranges from the other servers. Distributed numeric assignment can be configured to work with single attribute types or multiple attribute types. It is handled per attribute and is only applied to specific suffixes and specific entries within the subtree. 4.10.1. dnaPluginConfig (Object Class) This object class is used for entries which configure the DNA Plug-in and numeric ranges to assign to entries. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.324 Allowed Attributes dnaType dnaPrefix dnaNextValue dnaMaxValue dnaInterval dnaMagicRegen dnaFilter dnaScope dnaSharedCfgDN dnaThreshold dnaNextRange dnaRangeRequestTimeout cn 4.10.2. dnaFilter This attribute sets an LDAP filter to use to search for and identify the entries to which to apply the distributed numeric assignment range. The dnaFilter attribute is required to set up distributed numeric assignment for an attribute. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any valid LDAP filter Default Value None Syntax DirectoryString Example dnaFilter: (objectclass=person) 4.10.3. dnaInterval This attribute sets an interval to use to increment through numbers in a range. Essentially, this skips numbers at a predefined rate.
If the interval is 3 and the first number in the range is 1 , the next number used in the range is 4 , then 7 , then 10 , incrementing by three for every new number assignment. In a replication environment, the dnaInterval enables multiple servers to share the same range. However, when you configure different servers that share the same range, set the dnaInterval and dnaNextValue parameters accordingly so that the different servers do not generate the same values. You must also consider this if you add new servers to the replication topology. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any integer Default Value 1 Syntax Integer Example dnaInterval: 1 4.10.4. dnaMagicRegen This attribute sets a user-defined value that instructs the plug-in to assign a new value for the entry. The magic value can be used to assign new unique numbers to existing entries or as a standard setting when adding new entries. The magic value should be outside of the defined range for the server so that it cannot be triggered by accident. Note that this attribute does not have to be a number when used on a DirectoryString or other character type. However, in most cases the DNA plug-in is used on attributes which only accept integer values, and in such cases the dnamagicregen value must also be an integer. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any string Default Value None Syntax DirectoryString Example dnaMagicRegen: -1 4.10.5. dnaMaxValue This attribute sets the maximum value that can be assigned for the range. The default is -1 , which is the same as setting the highest 64-bit integer. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems; -1 is unlimited Default Value -1 Syntax Integer Example dnaMaxValue: 1000 4.10.6. dnaNextRange This attribute defines the range to use when the current range is exhausted. This value is automatically set when range is transferred between servers, but it can also be manually set to add a range to a server if range requests are not used. The dnaNextRange attribute should be set explicitly only if a separate, specific range has to be assigned to other servers. Any range set in the dnaNextRange attribute must be unique from the available range for the other servers to avoid duplication. If there is no request from the other servers and the server where dnaNextRange is set explicitly has reached its set dnaMaxValue , the set of values (part of the dnaNextRange ) is allocated from this deck. The dnaNextRange allocation is also limited by the dnaThreshold attribute that is set in the DNA configuration. Any range allocated to another server for dnaNextRange cannot violate the threshold for the server, even if the range is available on the deck of dnaNextRange . Note The dnaNextRange attribute is handled internally if it is not set explicitly. When it is handled automatically, the dnaMaxValue attribute serves as upper limit for the range. The attribute sets the range in the format lower_range-upper_range .
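As a hedged illustration of setting this attribute manually, the following ldapmodify sketch adds a next range to a hypothetical DNA configuration entry; the entry name, connection options, and range values are illustrative only:
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
changetype: modify
add: dnaNextRange
dnaNextRange: 5000-5999
The range value follows the lower_range-upper_range format described above and must not overlap ranges already available to other servers.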
Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems for the lower and upper ranges Default Value None Syntax DirectoryString Example dnaNextRange: 100-500 4.10.7. dnaNextValue This attribute gives the available number which can be assigned. After being initially set in the configuration entry, this attribute is managed by the Distributed Numeric Assignment Plug-in. The dnaNextValue attribute is required to set up distributed numeric assignment for an attribute. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value -1 Syntax Integer Example dnaNextValue: 1 4.10.8. dnaPrefix This attribute defines a prefix that can be prepended to the generated number values for the attribute. For example, to generate a user ID such as user1000 , the dnaPrefix setting would be user . dnaPrefix can hold any kind of string. However, some possible values for dnaType (such as uidNumber and gidNumber ) require only integer values. To use a prefix string, consider using a custom attribute for dnaType which allows strings. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any string Default Value None Example dnaPrefix: id 4.10.9. dnaRangeRequestTimeout One potential situation with the Distributed Numeric Assignment Plug-in is that one server begins to run out of numbers to assign. The dnaThreshold attribute sets a threshold of available numbers in the range, so that the server can request an additional range from the other servers before it is unable to perform number assignments. The dnaRangeRequestTimeout attribute sets a timeout period, in seconds, for range requests so that the server does not stall waiting on a new range from one server and can request a range from a new server. For range requests to be performed, the dnaSharedCfgDN attribute must be set. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value 10 Syntax Integer Example dnaRangeRequestTimeout: 15 4.10.10. dnaScope This attribute sets the base DN to search for entries to which to apply the distributed numeric assignment. This is analogous to the base DN in an ldapsearch . Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry Default Value None Syntax DirectoryString Example dnaScope: ou=people,dc=example,dc=com 4.10.11. dnaSharedCfgDN This attribute defines a shared identity that the servers can use to transfer ranges to one another. This entry is replicated between servers and is managed by the plug-in to let the other servers know what ranges are available. This attribute must be set for range transfers to be enabled. Note The shared configuration entry must be configured in the replicated subtree, so that the entry can be replicated to the servers. 
For example, if the ou=People,dc=example,dc=com subtree is replicated, then the configuration entry must be in that subtree, such as ou=UID Number Ranges , ou=People,dc=example,dc=com . The entry identified by this setting must be manually created by the administrator. The server will automatically create a sub-entry beneath it to transfer ranges. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any DN Default Value None Syntax DN Example dnaSharedCfgDN: cn=range transfer user,cn=config 4.10.12. dnaThreshold One potential situation with the Distributed Numeric Assignment Plug-in is that one server begins to run out of numbers to assign, which can cause problems. The Distributed Numeric Assignment Plug-in allows the server to request a new range from the available ranges on other servers. So that the server can recognize when it is reaching the end of its assigned range, the dnaThreshold attribute sets a threshold of remaining available numbers in the range. When the server hits the threshold, it sends a request for a new range. For range requests to be performed, the dnaSharedCfgDN attribute must be set. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range 1 to the maximum 32-bit integer on 32-bit systems and to the maximum 64-bit integer on 64-bit systems Default Value 100 Syntax Integer Example dnaThreshold: 100 4.10.13. dnaType This attribute sets which attributes have unique numbers being generated for them. In this case, whenever the attribute is added to the entry with the magic number, an assigned value is automatically supplied. This attribute is required to set a distributed numeric assignment for an attribute. If the dnaPrefix attribute is set, then the prefix value is prepended to whatever value is generated by dnaType . The dnaPrefix value can be any kind of string, but some reasonable values for dnaType (such as uidNumber and gidNumber ) require only integer values. To use a prefix string, consider using a custom attribute for dnaType which allows strings. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Example dnaType: uidNumber 4.10.14. dnaSharedConfig (Object Class) This object class is used to configure the shared configuration entry that is replicated between suppliers that are all using the same DNA Plug-in configuration for numeric assignments. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.325 Allowed Attributes dnaHostname dnaPortNum dnaSecurePortNum dnaRemainingValues 4.10.15. dnaHostname This attribute identifies the host name of a server in a shared range, as part of the DNA range configuration for that specific host in multi-supplier replication. Available ranges are tracked by host and the range information is replicated among all suppliers so that if any supplier runs low on available numbers, it can use the host information to contact another supplier and request a new range. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString Valid Range Any valid host name Default Value None Example dnahostname: ldap1.example.com 4.10.16. dnaPortNum This attribute gives the standard port number to use to connect to the host identified in dnaHostname .
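The shared configuration subentries described by the dnaSharedConfig object class are created and maintained by the plug-in itself, not by the administrator. Purely as a hedged sketch of their shape, using the attributes listed for this object class (the remaining ones are described in the following sections) and a hypothetical shared configuration suffix, host name, port numbers, and remaining-value count, such a subentry might look like this:
dn: dnaHostname=ldap1.example.com+dnaPortNum=389,ou=ranges,dc=example,dc=com
objectClass: top
objectClass: dnaSharedConfig
dnaHostname: ldap1.example.com
dnaPortNum: 389
dnaSecurePortNum: 636
dnaRemainingValues: 1000
This is not an entry to create by hand; it only illustrates how a supplier's available range information is published under the dnaSharedCfgDN.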
Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax Integer Valid Range 0 to 65535 Default Value 389 Example dnaPortNum: 389 4.10.17. dnaRemainingValues This attribute contains the number of values that are remaining and available to a server to assign to entries. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax Integer Valid Range Any integer Default Value None Example dnaRemainingValues: 1000 4.10.18. dnaRemoteBindCred Specifies the Replication Manager's password. If you set a bind method in the dnaRemoteBindMethod attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration entry under the cn=config entry. Set the parameter in plain text. The value is automatically AES-encrypted before it is stored. A server restart is required for the change to take effect. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString {AES} encrypted_password Valid Values Any valid AES-encrypted password. Default Value Example dnaRemoteBindCred: {AES-TUhNR0NTcUdTSWIzRFFFRkRUQm1NRVVHQ1NxR1NJYjNEUUVGRERBNEJDUmxObUk0WXpjM1l5MHdaVE5rTXpZNA0KTnkxaE9XSmhORGRoT0MwMk1ESmpNV014TUFBQ0FRSUNBU0F3Q2dZSUtvWklodmNOQWdjd0hRWUpZSVpJQVdVRA0KQkFFcUJCQk5KbUFDUWFOMHlITWdsUVp3QjBJOQ==}bBR3On6cBmw0DdhcRx826g== 4.10.19. dnaRemoteBindDN Specifies the Replication Manager DN. If you set a bind method in the dnaRemoteBindMethod attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration under the cn=config entry. A server restart is required for the change to take effect. Parameter Description Entry DN cn= DNA_config_entry ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config Syntax DirectoryString Valid Values Any valid Replication Manager DN. Default Value Example dnaRemoteBindDN: cn=replication manager,cn=config 4.10.20. dnaRemoteBindMethod Specifies the remote bind method. If you set a bind method in this attribute that requires authentication, additionally set the dnaRemoteBindDN and dnaRemoteBindCred parameter for every server in the replication deployment in the plug-in configuration entry under the cn=config entry. A server restart is required for the change to take effect. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax DirectoryString Valid Values SIMPLE | SSL | SASL/GSSAPI | SASL/DIGEST-MD5 Default Value Example dnaRemoteBindMethod: SIMPLE 4.10.21. dnaRemoteConnProtocol Specifies the remote connection protocol. A server restart is required for the change to take effect. Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax DirectoryString Valid Values LDAP , SSL , or TLS Default Value Example dnaRemoteConnProtocol: LDAP 4.10.22. dnaSecurePortNum This attribute gives the secure (TLS) port number to use to connect to the host identified in dnaHostname . Parameter Description Entry DN dnaHostname= host_name +dnaPortNum= port_number ,ou=ranges,dc=example,dc=com Syntax Integer Valid Range 0 to 65535 Default Value 636 Example dnaSecurePortNum: 636 4.11. 
Linked Attributes Plug-in Attributes Many times, entries have inherent relationships to each other (such as managers and employees, document entries and their authors, or special groups and group members). While attributes exist that reflect these relationships, these attributes have to be added and updated on each entry manually. That can lead to an inconsistent set of directory data, where these entry relationships are unclear, outdated, or missing. The Linked Attributes Plug-in allows one attribute, set in one entry, to update another attribute in another entry automatically. The first attribute has a DN value, which points to the entry to update; the second entry attribute also has a DN value which is a back-pointer to the first entry. The link attribute which is set by users and the dynamically-updated "managed" attribute in the affected entries are both defined by administrators in the Linked Attributes Plug-in instance. Conceptually, this is similar to the way that the MemberOf Plug-in uses the member attribute in group entries to set the memberOf attribute in user entries. Only with the Linked Attributes Plug-in, all of the link/managed attributes are user-defined and there can be multiple instances of the plug-in, each reflecting different link-managed relationships. There are a couple of caveats for linking attributes: Both the link attribute and the managed attribute must have DNs as values. The DN in the link attribute points to the entry to add the managed attribute to. The managed attribute contains the linked entry DN as its value. The managed attribute must be multi-valued. Otherwise, if multiple link attributes point to the same managed entry, the managed attribute value would not be updated accurately. 4.11.1. linkScope This restricts the scope of the plug-in, so it operates only in a specific subtree or suffix. If no scope is given, then the plug-in will update any part of the directory tree. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any DN Default Value None Syntax DN Example linkScope: ou=People,dc=example,dc=com 4.11.2. linkType This sets the user-managed link attribute. This attribute is modified and maintained by users, and when this attribute value changes, the managed attribute is automatically updated in the targeted entries. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Syntax DirectoryString Example linkType: directReport 4.11.3. managedType This sets the managed, or plug-in maintained, attribute. This attribute is managed dynamically by the Linked Attributes Plug-in instance. Whenever a change is made to the link attribute, the plug-in updates all of the managed attributes on the targeted entries. Parameter Description Entry DN cn= plugin_instance ,cn=Linked Attributes,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value None Syntax DN Example managedType: manager 4.12. Managed Entries Plug-in Attributes In some unique circumstances, it is useful to have an entry created automatically when another entry is created. For example, this can be part of Posix integration by creating a specific group entry when a new user is created.
Each instance of the Managed Entries Plug-in identifies two areas: The scope of the plug-in, meaning the subtree and the search filter to use to identify entries which require a corresponding managed entry A template entry that defines what the managed entry should look like 4.12.1. managedBase This attribute sets the subtree under which to create the managed entries. This can be any entry in the directory tree. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server subtree Default Value None Syntax DirectoryString Example managedBase: ou=groups,dc=example,dc=com 4.12.2. managedTemplate This attribute identifies the template entry to use to create the managed entry. This entry can be located anywhere in the directory tree; however, it is recommended that this entry is in a replicated suffix so that all suppliers and consumers in replication are using the same template. The attributes used to create the managed entry template are described in the Red Hat Directory Server Configuration, Command, and File Reference . Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server entry of the mepTemplateEntry object class Default Value None Syntax DirectoryString Example managedTemplate: cn=My Template,ou=Templates,dc=example,dc=com 4.12.3. originFilter This attribute sets the search filter to use to search for and identify the entries within the subtree which require a managed entry. The filter allows the managed entries behavior to be limited to a specific type of entry or subset of entries. The syntax is the same as a regular search filter. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any valid LDAP filter Default Value None Syntax DirectoryString Example originFilter: objectclass=posixAccount 4.12.4. originScope This attribute sets the scope of the search to use to see which entries the plug-in monitors. If a new entry is created within the scope subtree, then the Managed Entries Plug-in creates a new managed entry that corresponds to it. Parameter Description Entry DN cn= instance_name ,cn=Managed Entries Plugin,cn=plugins,cn=config Valid Values Any Directory Server subtree Default Value None Syntax DirectoryString Example originScope: ou=people,dc=example,dc=com 4.13. MemberOf Plug-in Attributes Group membership is defined within group entries using attributes such as member . Searching for the member attribute makes it easy to list all of the members for the group. However, group membership is not reflected in the member's user entry, so it is impossible to tell to what groups a person belongs by looking at the user's entry. The MemberOf Plug-in synchronizes the group membership in group members with the members' individual directory entries by identifying changes to a specific member attribute (such as member ) in the group entry and then working back to write the membership changes over to a specific attribute in the members' user entries. 4.13.1. cn Sets the name of the plug-in instance. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values Any valid string Default Value Syntax DirectoryString Example cn: Example MemberOf Plugin Instance 4.13.2. memberOfAllBackends This attribute specifies whether to search the local suffix for user entries or all available suffixes. 
This can be desirable in directory trees where users may be distributed across multiple databases so that group membership is evaluated comprehensively and consistently. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example memberOfAllBackends: on 4.13.3. memberOfAttr This attribute specifies the attribute in the user entry for the Directory Server to manage to reflect group membership. The MemberOf Plug-in generates the value of the attribute specified here in the directory entry for the member. There is a separate attribute for every group to which the user belongs. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value memberOf Syntax DirectoryString Example memberOfAttr: memberOf 4.13.4. memberOfAutoAddOC To enable the memberOf plug-in to add the memberOf attribute to a user, the user object must contain an object class that allows this attribute. If an entry does not have an object class that allows the memberOf attribute then the memberOf plugin will automatically add the object class listed in the memberOfAutoAddOC parameter. This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Values Any Directory Server object class Default Value nsMemberOf Syntax DirectoryString Example memberOfAutoAddOC: nsMemberOf 4.13.5. memberOfEntryScope If you configured several back ends or multiple-nested suffixes, the multi-valued memberOfEntryScope parameter enables you to set what suffixes the MemberOf plug-in works on. If the parameter is not set, the plug-in works on all suffixes. The value set in the memberOfEntryScopeExcludeSubtree parameter has a higher priority than values set in memberOfEntryScope . For further details, see the corresponding section in the Directory Server Administration Guide . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry DN. Default Value Syntax DirectoryString Example memberOfEntryScope: ou=people,dc=example,dc=com 4.13.6. memberOfEntryScopeExcludeSubtree If you configured several back ends or multiple-nested suffixes, the multi-valued memberOfEntryScopeExcludeSubtree parameter enables you to set what suffixes the MemberOf plug-in excludes. The value set in the memberOfEntryScopeExcludeSubtree parameter has a higher priority than values set in memberOfEntryScope . If the scopes set in both parameters overlap, the MemberOf plug-in only works on the non-overlapping directory entries. For further details, see the corresponding section in the Directory Server Administration Guide . This setting does not require restarting the server to take effect. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server entry DN. Default Value Syntax DirectoryString Example memberOfEntryScopeExcludeSubtree: ou=sample,dc=example,dc=com 4.13.7. memberOfGroupAttr This attribute specifies the attribute in the group entry to use to identify the DNs of group members. By default, this is the member attribute, but it can be any membership-related attribute that contains a DN value, such as uniquemember or member . 
Note Any attribute can be used for the memberOfGroupAttr value, but the MemberOf Plug-in only works if the value of the target attribute contains the DN of the member entry. For example, the member attribute contains the DN of the member's user entry: Some member-related attributes do not contain a DN, like the memberURL attribute. That attribute will not work as a value for memberOfGroupAttr . The memberURL value is a URL, and a non-DN value cannot work with the MemberOf Plug-in. Parameter Description Entry DN cn=MemberOf Plugin,cn=plugins,cn=config Valid Range Any Directory Server attribute Default Value member Syntax DirectoryString Example memberOfGroupAttr: member 4.14. Attribute Uniqueness Plug-in Attributes The Attribute Uniqueness plug-in ensures that the value of an attribute is unique across the directory or subtree. 4.14.1. cn Sets the name of the Attribute Uniqueness plug-in configuration record. You can use any string, but Red Hat recommends naming the configuration record attribute_name Attribute Uniqueness . Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid string Default Value None Syntax DirectoryString Example cn: mail Attribute Uniqueness 4.14.2. uniqueness-attribute-name Sets the name of the attribute whose values must be unique. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example uniqueness-attribute-name: mail 4.14.3. uniqueness-subtrees Sets the DN under which the plug-in checks for uniqueness of the attribute's value. This attribute is multi-valued. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid subtree DN Default Value None Syntax DirectoryString Example uniqueness-subtrees: ou=Sales,dc=example,dc=com 4.14.4. uniqueness-across-all-subtrees If enabled ( on ), the plug-in checks that the attribute is unique across all subtrees set. If you set the attribute to off , uniqueness is only enforced within the subtree of the updated entry. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values on | off Default Value off Syntax DirectoryString Example uniqueness-across-all-subtrees: off 4.14.5. uniqueness-top-entry-oc Directory Server searches this object class in the parent entry of the updated object. If it was not found, the search continues at the higher level entry up to the root of the directory tree. If the object class was found, Directory Server verifies that the value of the attribute set in uniqueness-attribute-name is unique in this subtree. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid object class Default Value None Syntax DirectoryString Example uniqueness-top-entry-oc: nsContainer 4.14.6. uniqueness-subtree-entries-oc Optionally, when using the uniqueness-top-entry-oc parameter, you can configure that the Attribute Uniqueness plug-in only verifies if an attribute is unique, if the entry contains the object class set in this parameter. Parameter Description Entry DN cn= attribute_uniqueness_configuration_record_name ,cn=plugins,cn=config Valid Values Any valid object class Default Value None Syntax DirectoryString Example uniqueness-subtree-entries-oc: inetOrgPerson 4.15. 
Posix Winsync API Plug-in Attributes By default, Posix-related attributes are not synchronized between Active Directory and Red Hat Directory Server. On Linux systems, system users and groups are identified as Posix entries, and LDAP Posix attributes contain that required information. However, when Windows users are synced over, they have ntUser and ntGroup attributes automatically added, which identify them as Windows accounts, but no Posix attributes are synced over (even if they exist on the Active Directory entry) and no Posix attributes are added on the Directory Server side. The Posix Winsync API Plug-in synchronizes POSIX attributes between Active Directory and Directory Server entries. Note All POSIX attributes (such as uidNumber, gidNumber, and homeDirectory) are synchronized between Active Directory and Directory Server entries. However, if a new POSIX entry or POSIX attributes are added to an existing entry in the Directory Server, only the POSIX attributes are synchronized over to the corresponding Active Directory entry. The POSIX object class ( posixAccount for users and posixGroup for groups) is not added to the Active Directory entry. This plug-in is disabled by default and must be enabled before any Posix attributes will be synchronized from the Active Directory entry to the Directory Server entry. 4.15.1. posixWinsyncCreateMemberOfTask This attribute sets whether to run the memberOf fix-up task immediately after a sync run in order to update group memberships for synced users. This is disabled by default because the memberOf fix-up task can be resource-intensive and cause performance issues if it is run too frequently. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncCreateMemberOfTask: false 4.15.2. posixWinsyncLowerCaseUID This attribute sets whether to store (and, if necessary, convert) the UID value in the memberUID attribute in lower case. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncLowerCaseUID: false 4.15.3. posixWinsyncMapMemberUID This attribute sets whether to map the memberUID attribute in an Active Directory group to the uniqueMember attribute in a Directory Server group. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value true Example posixWinsyncMapMemberUID: false 4.15.4. posixWinsyncMapNestedGrouping The posixWinsyncMapNestedGrouping parameter manages whether nested groups are updated when memberUID attributes in an Active Directory POSIX group change. Updating nested groups is supported up to a depth of five levels. Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncMapNestedGrouping: false 4.15.5. posixWinsyncMsSFUSchema This attribute sets whether to use the older Microsoft System Services for Unix 3.0 (msSFU30) schema when syncing Posix attributes from Active Directory. By default, the Posix Winsync API Plug-in uses the Posix schema for modern Active Directory servers: 2005, 2008, and later versions. There are slight differences between the modern Active Directory Posix schema and the Posix schema used by Windows Server 2003 and older Windows servers. If an Active Directory domain is using the older-style schema, then the older-style schema can be used instead.
Parameter Description Entry DN cn=Posix Winsync API Plugin,cn=plugins,cn=config Valid Range true | false Default Value false Example posixWinsyncMsSFUSchema: true 4.16. Retro Changelog Plug-in Attributes Two different types of changelogs are maintained by Directory Server. The first type, referred to as simply a changelog , is used by multi-supplier replication, and the second changelog, a plug-in referred to as the retro changelog , is intended for use by LDAP clients for maintaining application compatibility with Directory Server 4.x versions. This Retro Changelog Plug-in is used to record modifications made to a supplier server. When the supplier server's directory is modified, an entry is written to the Retro Changelog that contains both of the following: A number that uniquely identifies the modification. This number is sequential with respect to other entries in the changelog. The modification action; that is, exactly how the directory was modified. It is through the Retro Changelog Plug-in that the changes performed to the Directory Server are accessed using searches to cn=changelog suffix. 4.16.1. isReplicated This optional attribute sets a flag to indicate on a change in the changelog whether the change is newly made on that server or whether it was replicated over from another server. Parameter Description OID 2.16.840.1.113730.3.1.2085 Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values true | false Default Value None Syntax Boolean Example isReplicated: true 4.16.2. nsslapd-attribute This attribute explicitly specifies another Directory Server attribute which must be included in the retro changelog entries. Many operational attributes and other types of attributes are commonly excluded from the retro changelog, but these attributes may need to be present for a third-party application to use the changelog data. This is done by listing the attribute in the retro changelog plug-in configuration using the nsslapd-attribute parameter. It is also possible to specify an optional alias for the specified attribute within the nsslapd-attribute value. Using an alias for the attribute can help avoid conflicts with other attributes in an external server or application which may use the retro changelog records. Note Setting the value of the nsslapd-attribute attribute to isReplicated is a way of indicating, in the retro changelog entry itself, whether the modification was done on the local server (that is, whether the change is an original change) or whether the change was replicated over to the server. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid directory attribute (standard or custom) Default Value None Syntax DirectoryString Example nsslapd-attribute: nsUniqueId: uniqueID 4.16.3. nsslapd-changelogdir This attribute specifies the name of the directory in which the changelog database is created the first time the plug-in is run. By default, the database is stored with all the other databases under /var/lib/dirsrv/slapd- instance /changelogdb . Note For performance reasons, store this database on a different physical disk. The server has to be restarted for changes to this attribute to go into effect. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid path to the directory Default Value None Syntax DirectoryString Example nsslapd-changelogdir: /var/lib/dirsrv/slapd- instance /changelogdb 4.16.4. 
nsslapd-changelogmaxage (Max Changelog Age) This attribute specifies the maximum age of any entry in the changelog. The changelog contains a record for each directory modification and is used when synchronizing consumer servers. Each record contains a timestamp. Any record with a timestamp that is older than the value specified in this attribute is removed. If nsslapd-changelogmaxage attribute is absent, there is no age limit on changelog records. Note Expired changelog records will not be removed if there is an agreement that has fallen behind further than the maximum age. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Range 0 (meaning that entries are not removed according to their age) to the maximum 32 bit integer value (2147483647) Default Value 7d Syntax DirectoryString Integer AgeID AgeID is s (S) for seconds, m (M) for minutes, h (H) for hours, d (D) for days, w (W) for weeks. Example nsslapd-changelogmaxage: 30d 4.16.5. nsslapd-exclude-attrs The nsslapd-exclude-attrs parameter stores an attribute name to exclude from the retro changelog database. To exclude multiple attributes, add one nsslapd-exclude-attrs parameter for each attribute to exclude. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example nsslapd-exclude-attrs: example 4.16.6. nsslapd-exclude-suffix The nsslapd-exclude-suffix parameter stores a suffix to exclude from the retro changelog database. You can add the parameter multiple times to exclude multiple suffixes. Parameter Description Entry DN cn=Retro Changelog Plugin,cn=plugins,cn=config Valid Values Any valid attribute name Default Value None Syntax DirectoryString Example nsslapd-exclude-suffix: ou=demo,dc=example,dc=com 4.17. RootDN Access Control Plug-in Attributes The root DN, cn=Directory Manager, is a special user entry that is defined outside the normal user database. Normal access control rules are not applied to the root DN, but because of the powerful nature of the root user, it can be beneficial to apply some kind of access control rules to the root user. The RootDN Access Control Plug-in sets normal access controls - host and IP address restrictions, time-of-day restrictions, and day of week restrictions - on the root user. This plug-in is disabled by default. 4.17.1. rootdn-allow-host This sets what hosts, by fully-qualified domain name, the root user is allowed to use to access the Directory Server. Any hosts not listed are implicitly denied. Wild cards are allowed. This attribute can be used multiple times to specify multiple hosts, domains, or subdomains. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid host name or domain, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-allow-host: *.example.com 4.17.2. rootdn-allow-ip This sets what IP addresses, either IPv4 or IPv6, for machines the root user is allowed to use to access the Directory Server. Any IP addresses not listed are implicitly denied. Wild cards are allowed. This attribute can be used multiple times to specify multiple addresses, domains, or subnets. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid IPv4 or IPv6 address, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-allow-ip: 192.168. . 4.17.3. 
rootdn-close-time This sets part of a time period or range when the root user is allowed to access the Directory Server. This sets when the time-based access ends, when the root user is no longer allowed to access the Directory Server. This is used in conjunction with the rootdn-open-time attribute. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid time, in a 24-hour format Default Value None Syntax Integer Example rootdn-close-time: 1700 4.17.4. rootdn-days-allowed This gives a comma-separated list of what days the root user is allowed to use to access the Directory Server. Any days not listed are implicitly denied. This can be used with rootdn-close-time and rootdn-open-time to combine time-based access and days-of-week, or it can be used by itself (with all hours allowed on allowed days). Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Values * Sun * Mon * Tue * Wed * Thu * Fri * Sat Default Value None Syntax DirectoryString Example rootdn-days-allowed: Mon, Tue, Wed, Thu, Fri 4.17.5. rootdn-deny-ip This sets what IP addresses, either IPv4 or IPv6, for machines the root user is not allowed to use to access the Directory Server. Any IP addresses not listed are implicitly allowed. Note Deny rules supersede allow rules, so if an IP address is listed in both the rootdn-allow-ip and rootdn-deny-ip attributes, it is denied access. Wild cards are allowed. This attribute can be used multiple times to specify multiple addresses, domains, or subnets. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid IPv4 or IPv6 address, including asterisks (*) for wildcards Default Value None Syntax DirectoryString Example rootdn-deny-ip: 192.168.0.0 4.17.6. rootdn-open-time This sets part of a time period or range when the root user is allowed to access the Directory Server. This sets when the time-based access begins. This is used in conjunction with the rootdn-close-time attribute. Parameter Description Entry DN cn=RootDN Access Control Plugin,cn=plugins,cn=config Valid Range Any valid time, in a 24-hour format Default Value None Syntax Integer Example rootdn-open-time: 0800 4.18. Referential Integrity Plug-in Attributes Referential Integrity ensures that when you perform update or remove operations on an entry in the directory, the server also updates information for entries that reference the removed or updated entry. For example, if a user's entry is removed from the directory and Referential Integrity is enabled, the server also removes the user from any groups where the user is a member. 4.18.1. nsslapd-pluginAllowReplUpdates Referential Integrity can be a very resource-demanding procedure. Therefore, if you configured multi-supplier replication, the Referential Integrity plug-in ignores replicated updates by default. However, sometimes it is not possible to enable the Referential Integrity plug-in, or the plug-in is not available. For example, one of the suppliers in your replication topology might be Active Directory (see the Windows Synchronization chapter for more details), which does not support Referential Integrity. In cases like this, you can allow the Referential Integrity plug-in on another supplier to process replicated updates by using the nsslapd-pluginAllowReplUpdates attribute. Important Only one supplier must have the nsslapd-pluginAllowReplUpdates attribute set to on in a multi-supplier replication topology.
Otherwise, it can lead to replication errors and requires a full initialization to fix the problem. On the other hand, the Referential Integrity plug-in must be enabled on all suppliers where possible. Parameter Description Entry DN cn=referential integrity postoperation,cn=plugins,cn=config Valid Values on | off Default Value off Syntax Boolean Example nsslapd-pluginAllowReplUpdates: on
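The following is a minimal sketch of how some of the settings above might be applied with ldapmodify, in the style of the other examples in this chapter. The server hostname is illustrative; the entry DNs and attribute values come from the parameter tables above.
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
# Restrict root DN access to weekday business hours (RootDN Access Control Plug-in).
dn: cn=RootDN Access Control Plugin,cn=plugins,cn=config
changetype: modify
replace: rootdn-open-time
rootdn-open-time: 0800
-
replace: rootdn-close-time
rootdn-close-time: 1700
-
replace: rootdn-days-allowed
rootdn-days-allowed: Mon, Tue, Wed, Thu, Fri

ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
# Allow exactly one supplier to apply Referential Integrity updates for replicated operations.
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginAllowReplUpdates
nsslapd-pluginAllowReplUpdates: on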
[ "dn: cn=Telephone Syntax,cn=plugins,cn=config objectclass: top objectclass: nsSlapdPlugin objectclass: extensibleObject cn: Telephone Syntax nsslapd-pluginPath: libsyntax-plugin nsslapd-pluginInitfunc: tel_init nsslapd-pluginType: syntax nsslapd-pluginEnabled: on", "dn:cn=ACL Plugin,cn=plugins,cn=config objectclass:top objectclass:nsSlapdPlugin objectclass:extensibleObject", "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -b \"cn=Password Storage Schemes,cn=plugins,cn=config\" -s sub \"(objectclass=*)\" dn", "(modifyTimestamp>=20200101010101Z)", "nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40", "nsslapd-cache-autosize: 10 nsslapd-cache-autosize-split: 40", "dn:cn=aci,cn=index,cn=UserRoot,cn=ldbm database,cn=plugins,cn=config objectclass:top objectclass:nsIndex cn:aci nsSystemIndex:true nsIndexType:pres", "abc*", "*xyz", "ab*z", "dn:cn=userPassword,cn=encrypted attributes,o=UserRoot,cn=ldbm database, cn=plugins,cn=config objectclass:top objectclass:nsAttributeEncryption cn:userPassword nsEncryptionAlgorithm:AES", "dn: cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: PAM Pass Through Auth nsslapd-pluginPath: libpam-passthru-plugin nsslapd-pluginInitfunc: pam_passthruauth_init nsslapd-pluginType: preoperation nsslapd-pluginEnabled: on nsslapd-pluginLoadGlobal: true nsslapd-plugin-depends-on-type: database nsslapd-pluginId: pam_passthruauth nsslapd-pluginVersion: 9.0.0 nsslapd-pluginVendor: Red Hat nsslapd-pluginDescription: PAM pass through authentication plugin dn: cn=Example PAM Config,cn=PAM Pass Through Auth,cn=plugins,cn=config objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject objectClass: pamConfig cn: Example PAM Config pamMissingSuffix: ALLOW pamExcludeSuffix: cn=config pamIDMapMethod: RDN ou=people,dc=example,dc=com pamIDMapMethod: ENTRY ou=engineering,dc=example,dc=com pamIDAttr: customPamUid pamFilter: (manager=uid=bjensen,ou=people,dc=example,dc=com) pamFallback: FALSE pamSecure: TRUE pamService: ldapserver", "pamIDMapMethod: RDN pamSecure: FALSE pamService: ldapserver", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: cn=Example PAM config entry,cn=PAM Pass Through Auth,cn=plugins,cn=config changetype: modify add: pamModuleIsThreadSafe pamModuleIsThreadSafe: on", "dn: cn=Account Policy Plugin,cn=plugins,cn=config nsslapd-pluginarg0: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config", "dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config objectClass: top objectClass: extensibleObject cn: config ... attributes for evaluating accounts alwaysRecordLogin: yes stateattrname: lastLoginTime altstateattrname: createTimestamp ... 
attributes for account policy entries specattrname: acctPolicySubentry limitattrname: accountInactivityLimit", "dn: cn=AccountPolicy,dc=example,dc=com objectClass: top objectClass: ldapsubentry objectClass: extensibleObject objectClass: accountpolicy 86400 seconds per day * 30 days = 2592000 seconds accountInactivityLimit: 2592000 cn: AccountPolicy", "dn: uid=scarter,ou=people,dc=example,dc=com lastLoginTime: 20060527001051Z acctPolicySubentry: cn=AccountPolicy,dc=example,dc=com", "dn: cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberDefinition cn: Hostgroups autoMemberScope: dc=example,dc=com autoMemberFilter: objectclass=ipHost autoMemberDefaultGroup: cn=systems,cn=hostgroups,ou=groups,dc=example,dc=com autoMemberGroupingAttr: member:dn", "dn: cn=webservers,cn=Hostgroups,cn=Auto Membership Plugin,cn=plugins,cn=config objectclass: autoMemberRegexRule description: Group for webservers cn: webservers autoMemberTargetGroup: cn=webservers,cn=hostgroups,dc=example,dc=com autoMemberInclusiveRegex: fqdn=^www\\.web[0-9]+\\.example\\.com", "member: uid=jsmith,ou=People,dc=example,dc=com", "nsslapd-attribute: attribute : alias" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/Plug_in_Implemented_Server_Functionality_Reference
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding the storage taint to nodes might require toleration handling for other daemonset pods, such as the openshift-dns daemonset. For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources.
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node only schedules OpenShift Data Foundation resources and repels any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role.kubernetes.io/worker="" node role. Removing the node-role.kubernetes.io/worker="" node role can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If it has already been removed, it should be added again to each infra node. Adding the node-role.kubernetes.io/infra="" node role and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that has to be tainted. On the Details page, click Edit taints . Enter node.ocs.openshift.io/storage in the Key field, true in the Value field, and NoSchedule in the Effect field. Click Save . Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click the YAML tab. In the specs section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere .
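If another workload, such as a logging or monitoring daemonset, must still run on the tainted infrastructure nodes, add a toleration for the storage taint to its pod template. The following is a minimal sketch of such a toleration stanza; which daemonsets need it depends on the cluster.
spec:
  template:
    spec:
      tolerations:
      # tolerate the OpenShift Data Foundation storage taint
      - key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
        effect: NoSchedule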
[ "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_osp
probe::netfilter.ip.post_routing
probe::netfilter.ip.post_routing Name probe::netfilter.ip.post_routing - Called immediately before an outgoing IP packet leaves the computer Synopsis netfilter.ip.post_routing Values family IP address family outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict ipproto_tcp Constant used to signify that the packet protocol is TCP indev_name Name of network device packet was received on (if known) nf_stolen Constant used to signify a 'stolen' verdict length The length of the packet buffer contents, in bytes urg TCP URG flag (if protocol is TCP; ipv4 only) psh TCP PSH flag (if protocol is TCP; ipv4 only) rst TCP RST flag (if protocol is TCP; ipv4 only) protocol Packet protocol from driver (ipv4 only) nf_stop Constant used to signify a 'stop' verdict nf_accept Constant used to signify an 'accept' verdict outdev_name Name of network device packet will be routed to (if known) iphdr Address of IP header fin TCP FIN flag (if protocol is TCP; ipv4 only) syn TCP SYN flag (if protocol is TCP; ipv4 only) ipproto_udp Constant used to signify that the packet protocol is UDP ack TCP ACK flag (if protocol is TCP; ipv4 only) nf_queue Constant used to signify a 'queue' verdict dport TCP or UDP destination port (ipv4 only) pf Protocol family -- either " ipv4 " or " ipv6 " daddr A string representing the destination IP address nf_drop Constant used to signify a 'drop' verdict indev Address of net_device representing input device, 0 if unknown saddr A string representing the source IP address sport TCP or UDP source port (ipv4 only)
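A minimal SystemTap sketch that uses this probe to tally outgoing IPv4 TCP traffic by destination port; the ten-second run time and the reported fields are illustrative choices.
global ports
probe netfilter.ip.post_routing {
  # only count IPv4 TCP packets
  if (pf == "ipv4" && protocol == ipproto_tcp)
    ports[dport] <<< length
}
probe timer.s(10) {
  foreach (p in ports)
    printf("dport %d: %d packets, %d bytes\n", p, @count(ports[p]), @sum(ports[p]))
  exit()
}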
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netfilter-ip-post-routing
Chapter 7. Managing logical volumes using the Web Console
Chapter 7. Managing logical volumes using the Web Console 7.1. Activating a logical volume using the Web Console Follow these instructions to activate a logical volume using the Web Console. Log in to the Web Console. Click the hostname Storage . Click the volume group. The Volume Group Overview page opens. Click the logical volume. Click Activate . 7.2. Creating a thinly provisioned logical volume using the Web Console Log in to the Web Console. Click the hostname Storage . Click the volume group. The Volume Group Overview page opens. Click Create Thin Volume beside the thin pool that should host the volume. Figure 7.1. A thin pool The Create Thin Volume window opens. Specify a Name for the new volume. Specify a Size for the new volume. Click Create . The new volume appears in the list of logical volumes. 7.3. Creating a thickly provisioned logical volume using the Web Console Follow these instructions to create a thickly provisioned logical volume using the Web Console. Log in to the Web Console. Click the hostname Storage . Click the volume group. The Volume Group Overview page opens. Click + Create new Logical Volume . The Create Logical Volume window opens. Figure 7.2. The Create Logical Volume window Specify a Name for your logical volume. Set Purpose to Block device for file systems . Specify a Size for your logical volume. Click Create . Your new logical volume appears in the list of logical volumes in this volume group. 7.4. Deactivating a logical volume using the Web Console Follow these instructions to deactivate a logical volume using the Web Console. Log in to the Web Console. Click the hostname Storage . Click the volume group. The Volume Group Overview page opens. Click the logical volume. Click Deactivate . Figure 7.3. The logical volume summary 7.5. Deleting a logical volume using the Web Console Follow these instructions to delete a thinly- or thickly-provisioned logical volume. Log in to the Web Console. Click the hostname Storage . Click the volume group. The Volume Group Overview page opens. Click the logical volume. Click Delete in the logical volume summary. Click Delete to confirm deletion. 7.6. Growing a logical volume using the Web Console Follow these instructions to increase the size of a logical volume using the Web Console. Log in to the Web Console. Click the hostname Storage . Click the volume group. The Volume Group Overview page opens. Click the logical volume. On the Volume subtab, click Grow . Figure 7.4. Logical Volume section expanded The Grow Logical Volume window opens. Figure 7.5. The Grow Logical Volume window Specify the new Size of the logical volume. Click Grow .
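For reference, the Web Console actions above map onto standard LVM commands. The following is a hedged sketch of the command-line equivalents; the volume group, thin pool, and logical volume names are illustrative.
# Activate or deactivate a logical volume
lvchange -ay gluster_vg/gluster_lv
lvchange -an gluster_vg/gluster_lv
# Create a thinly provisioned logical volume in an existing thin pool
lvcreate --virtualsize 10G --thin gluster_vg/gluster_thinpool --name gluster_thin_lv
# Create a thickly provisioned logical volume
lvcreate --size 10G --name gluster_thick_lv gluster_vg
# Delete a logical volume
lvremove gluster_vg/gluster_lv
# Grow a logical volume by 5 GiB
lvextend --size +5G gluster_vg/gluster_lv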
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_red_hat_gluster_storage_using_the_web_console/assembly-cockpit-managing_lv
Chapter 1. Installing a cluster on any platform
Chapter 1. Installing a cluster on any platform In OpenShift Container Platform version 4.15, you can install a cluster on any infrastructure that you provision, including virtualization and cloud environments. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 1.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 1.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 1.3.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 1.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 1.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 1.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. 
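As an illustration of that recommendation, a host reservation in an ISC dhcpd configuration might pin the address and hostname of one control plane node; the MAC address below is a placeholder, and the IP address and name values mirror the example DNS records later in this chapter.
host control-plane0 {
  hardware ethernet 52:54:00:aa:bb:cc;  # placeholder MAC address
  fixed-address 192.168.1.97;
  option host-name "control-plane0.ocp4.example.com";
  option domain-name-servers 192.168.1.5;
}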
Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 1.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 1.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 1.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 1.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 1.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. 
If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 1.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. 
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 1.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 1.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probe intervals of 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. Connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3.
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure.
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 1.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 1.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 1.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 1.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 1.9.1. Sample install-config.yaml file for other platforms You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. 
If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 1.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. 
In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 1.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . 
This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 1.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. 
coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 1.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
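Because a SHA512 digest of each Ignition config file is needed later for the coreos-installer --ignition-hash option, you can compute all three digests in one pass. The following loop is a minimal sketch that assumes the Ignition config files are still present in <installation_directory> on the installation host; adjust the path for your environment:
# Print the SHA512 digest of the bootstrap, control plane, and compute Ignition configs.
for node_type in bootstrap master worker; do
  sha512sum <installation_directory>/USD{node_type}.ign
done
Keep the digests at hand; each one is supplied as sha512-<digest> when you run coreos-installer for the corresponding node type.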
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine.
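As noted above, if you provide your Ignition config files through an HTTPS server whose certificate is signed by an internal certificate authority, you can add that CA to the trust store of the live environment before running coreos-installer . The following commands are a minimal sketch of this approach; the file name internal-ca.pem and the server names are placeholders for your own CA certificate and HTTPS server:
# Fetch the internal CA certificate and add it to the live system trust store.
USD curl -o internal-ca.pem http://<HTTP_server>/internal-ca.pem
USD sudo cp internal-ca.pem /etc/pki/ca-trust/source/anchors/
USD sudo update-ca-trust
# coreos-installer can now verify the HTTPS server when it fetches the Ignition config.
USD sudo coreos-installer install --ignition-url=https://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest>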
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
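One way to check all three files in a single pass is the following loop, a sketch that assumes the Ignition config files were uploaded to http://<HTTP_server>/ ; it prints the HTTP status code and downloaded size for each file so that a missing or truncated upload is easy to spot:
# Report the HTTP status code and size returned for each Ignition config file.
for node_type in bootstrap master worker; do
  curl -k -s -o /dev/null -w "USD{node_type}.ign: HTTP %{http_code}, %{size_download} bytes\n" http://<HTTP_server>/USD{node_type}.ign
done
A 200 status for each file indicates that the HTTP server can serve the Ignition configs to the cluster nodes.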
Although it is possible to obtain the RHCOS kernel , initramfs , and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server.
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran.
Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.3. Advanced RHCOS installation configuration A key benefit of manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 1.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui , as shown in the sketch after this procedure. Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system.
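To illustrate the networking step in the preceding procedure, the following nmcli commands assign the static addressing used in the examples later in this section before you run coreos-installer with --copy-network . This is a sketch only: the connection name "Wired connection 1", the addresses, and the DNS server are placeholders for your environment:
USD sudo nmcli connection modify 'Wired connection 1' ipv4.method manual ipv4.addresses 10.10.10.2/24 ipv4.gateway 10.10.10.254 ipv4.dns 4.4.4.41
USD sudo nmcli connection up 'Wired connection 1'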
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 1.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 1.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. 
The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 1.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number.
Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 1.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 1.11.3.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 1.11.3.4.1. 
Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . 
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command ( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 1.11.3.4.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 1.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the target device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i , --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname.
--network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. 
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o , --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 1.11.3.4.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 1.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot.
For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 1.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 1.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 1.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 1.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 1.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 1.15.2. 
Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-install to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 1.15.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 1.15.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The persistent storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 1.15.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 1.15.3.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and run with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 1.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 1.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 1.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", 
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: 
Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_any_platform/installing-platform-agnostic
10.10. Quorum Disk Does Not Appear as Cluster Member
10.10. Quorum Disk Does Not Appear as Cluster Member If you have configured your system to use a quorum disk but the quorum disk does not appear as a member of your cluster, you can perform the following steps. Review the /var/log/cluster/qdiskd.log file. Run ps -ef | grep qdisk to determine if the process is running. Ensure that <quorumd...> is configured correctly in the /etc/cluster/cluster.conf file. Enable debugging output for the qdiskd daemon. For information on enabling debugging in the /etc/cluster/cluster.conf file, see Section 8.7, "Configuring Debug Options" . For information on enabling debugging using luci , see Section 4.5.6, "Logging Configuration" . For information on enabling debugging with the ccs command, see Section 6.14.4, "Logging" . Note that it may take multiple minutes for the quorum disk to register with the cluster. This is normal and expected behavior.
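The checks in this section can be run together from a shell on the affected node; the following is only a sketch that assumes the default log location and the standard cluster tools shipped with this release:

# Look for recent qdiskd errors or warnings
grep -iE 'error|warn' /var/log/cluster/qdiskd.log | tail -n 20

# Confirm that the qdiskd process is running
ps -ef | grep '[q]disk'

# Check how the cluster currently counts quorum votes and members
cman_tool status
clustat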
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-qdisknotmember-ca
Installing on OCI
Installing on OCI OpenShift Container Platform 4.15 Installing OpenShift Container Platform on Oracle Cloud Infrastructure Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_oci/index
Jenkins
Jenkins OpenShift Container Platform 4.18 Jenkins Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/jenkins/index
Chapter 4. Viewing application composition by using the Topology view
Chapter 4. Viewing application composition by using the Topology view The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them. 4.1. Prerequisites To view your applications in the Topology view and interact with them, ensure that: You have logged in to the web console . You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. You are in the Developer perspective . 4.2. Viewing the topology of your application You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you deploy an application, you are directed automatically to the Graph view where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application. The Topology view provides you the option to monitor your applications using the List view. Use the List view icon ( ) to see a list of all your applications and use the Graph view icon ( ) to switch back to the graph view. You can customize the views as required using the following: Use the Find by name field to find the required components. Search results may appear outside of the visible area; click Fit to Screen from the lower-left toolbar to resize the Topology view to show all components. Use the Display Options drop-down list to configure the Topology view of the various application groupings. The options are available depending on the types of components deployed in the project: Expand group Virtual Machines: Toggle to show or hide the virtual machines. Application Groupings: Clear to condense the application groups into cards with an overview of an application group and alerts associated with it. Helm Releases: Clear to condense the components deployed as Helm Release into cards with an overview of a given release. Knative Services: Clear to condense the Knative Service components into cards with an overview of a given component. Operator Groupings: Clear to condense the components deployed with an Operator into cards with an overview of the given group. Show elements based on Pod Count or Labels Pod Count: Select to show the number of pods of a component in the component icon. Labels: Toggle to show or hide the component labels. The Topology view also provides you the Export application option to download your application in the ZIP file format. You can then import the downloaded application to another project or cluster. For more details, see Exporting an application to another project or cluster in the Additional resources section. 4.3. Interacting with applications and components In the Topology view in the Developer perspective of the web console, the Graph view provides the following options to interact with applications and components: Click Open URL ( ) to see your application exposed by the route on a public URL. Click Edit Source code to access your source code and modify it. Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. Hover your cursor over the lower left icon on the pod to see the name of the latest build and its status. 
The status of the application build is indicated as New ( ), Pending ( ), Running ( ), Completed ( ), Failed ( ), and Canceled ( ). The status or phase of the pod is indicated by different colors and tooltips as: Running ( ): The pod is bound to a node and all of the containers are created. At least one container is still running or is in the process of starting or restarting. Not Ready ( ): The pod is running multiple containers, but not all of the containers are ready. Warning ( ): Containers in the pod are being terminated, but termination did not succeed. Some containers may be in other states. Failed ( ): All containers in the pod have terminated, but at least one container terminated in failure. That is, the container either exited with a non-zero status or was terminated by the system. Pending ( ): The pod is accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network. Succeeded ( ): All containers in the pod terminated successfully and will not be restarted. Terminating ( ): When a pod is being deleted, it is shown as Terminating by some kubectl commands. Terminating status is not one of the pod phases. A pod is granted a graceful termination period, which defaults to 30 seconds. Unknown ( ): The state of the pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the pod should be running. After you create an application and an image is deployed, the status is shown as Pending. After the application is built, it is displayed as Running. Figure 4.1. Application topology The application resource name is appended with indicators for the different types of resource objects as follows: CJ : CronJob D : Deployment DC : DeploymentConfig DS : DaemonSet J : Job P : Pod SS : StatefulSet (Knative): A serverless application Note Serverless applications take some time to load and display on the Graph view. When you deploy a serverless application, it first creates a service resource and then a revision. After that, it is deployed and displayed on the Graph view. If it is the only workload, you might be redirected to the Add page. After the revision is deployed, the serverless application is displayed on the Graph view. 4.4. Scaling application pods and checking builds and routes The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Details tabs to scale the application pods, check build status, services, and routes as follows: Click on the component node to see the Overview panel to the right. Use the Details tab to: Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic. Check the Labels , Annotations , and Status of the application. Click the Resources tab to: See the list of all the pods, view their status, access logs, and click on the pod to see the pod details. See the builds, their status, access logs, and start a new build if needed. See the services and routes used by the component. For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component. 4.5. Adding components to an existing project You can add components to a project.
Procedure Navigate to the +Add view. Click Add to Project ( ) in the left navigation pane, or press Ctrl + Space. Search for the component and click the Start / Create / Install button, or press Enter, to add the component to the project and see it in the topology Graph view. Figure 4.2. Adding component via quick search Alternatively, you can also use the available options in the context menu, such as Import from Git , Container Image , Database , From Catalog , Operator Backed , Helm Charts , Samples , or Upload JAR file , by right-clicking in the topology Graph view to add a component to your project. Figure 4.3. Context menu to add services 4.6. Grouping multiple components within an application You can use the +Add view to add multiple components or services to your project and use the topology Graph view to group applications and resources within an application group. Prerequisites You have created and deployed a minimum of two components on OpenShift Container Platform using the Developer perspective. Procedure To add a service to an existing application group, press Shift and drag the service onto the application group. Dragging a component and adding it to an application group adds the required labels to the component. Figure 4.4. Application grouping Alternatively, you can also add the component to an application as follows: Click the service pod to see the Overview panel to the right. Click the Actions drop-down menu and select Edit Application Grouping. In the Edit Application Grouping dialog box, click the Application drop-down list, and select an appropriate application group. Click Save to add the service to the application group. You can remove a component from an application group by selecting the component and using Shift + drag to drag it out of the application group. 4.7. Adding services to your application To add a service to your application, use the +Add actions from the context menu in the topology Graph view. Note In addition to the context menu, you can add services by using the sidebar or by hovering over and dragging the dangling arrow from the application group. Procedure Right-click an application group in the topology Graph view to display the context menu. Figure 4.5. Add resource context menu Use Add to Application to select a method for adding a service to the application group, such as From Git , Container Image , From Dockerfile , From Devfile , Upload JAR file , Event Source , Channel , or Broker . Complete the form for the method you choose and click Create. For example, to add a service based on the source code in your Git repository, choose the From Git method, fill in the Import from Git form, and click Create. 4.8. Removing services from your application In the topology Graph view, remove a service from your application by using the context menu. Procedure Right-click a service in an application group in the topology Graph view to display the context menu. Select Delete Deployment to delete the service. Figure 4.6. Deleting deployment option 4.9. Labels and annotations used for the Topology view The Topology view uses the following labels and annotations: Icon displayed in the node Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label. This matching is done using a predefined set of icons. Link to the source code editor or the source The app.openshift.io/vcs-uri annotation is used to create links to the source code editor.
Node Connector The app.openshift.io/connects-to annotation is used to connect the nodes. App grouping The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components. For detailed information on the labels and annotations OpenShift Container Platform applications must use, see Guidelines for labels and annotations for OpenShift applications . 4.10. Additional resources See Importing a codebase from Git to create an application for more information on creating an application from Git. See Exporting applications .
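If you prefer to apply these labels and annotations from the command line instead of dragging components in the view, the following is a minimal sketch; the deployment name, application name, runtime value, and repository URL are placeholders, and the same keys can equally be set in the workload manifest:

# Group a workload into an application and select its Topology icon (placeholder values)
oc label deployment/my-app app.kubernetes.io/part-of=my-application
oc label deployment/my-app app.openshift.io/runtime=nodejs

# Link the node to its source repository so the code editor shortcut appears
oc annotate deployment/my-app app.openshift.io/vcs-uri=https://github.com/example/my-app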
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/building_applications/odc-viewing-application-composition-using-topology-view
5.3. Configure CXF for a Web Service Data Source: WS-Security
5.3. Configure CXF for a Web Service Data Source: WS-Security Prerequisites The web service data source must be configured and the ConfigFile and EndPointName properties must be configured for CXF. Procedure 5.2. Configure CXF for a Web Service Data Source: WS-Security Specify the CXF SecurityType Run the following command from within the Management CLI, using WSSecurity as the value for SecurityType : Modify the CXF Configuration File Open the CXF configuration file for the web service data source and add your desired properties. The following is an example of a web service data source CXF configuration file adding a timestamp to the SOAP header: Note A WSDL is not expected to describe the service being used. The Spring XML configuration file must contain the relevant policy configuration. The client port configuration is matched to the data source instance by the CONFIG-NAME . The configuration may contain other port configurations with different local names. References For more information about WS-Security and CXF configuration options refer to http://cxf.apache.org/docs/ws-security.html .
[ "/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=SecurityType:add(value=WSSecurity)", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd\"> <jaxws:client name=\"{http://teiid.org}.teiid\" createdFromAPI=\"true\"> <jaxws:outInterceptors> <ref bean=\"Timestamp_Request\"/> </jaxws:outInterceptors> </jaxws:client> <bean id=\"Timestamp_Request\"> <constructor-arg> <map> <entry key=\"action\" value=\"Timestamp\"/> </map> </constructor-arg> </bean> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/configure_cxf_for_a_web_service_data_source_ws-security1
7.2. Common Replication Scenarios
7.2. Common Replication Scenarios Decide how the updates flow from server to server and how the servers interact when propagating updates. There are four basic scenarios and a few strategies for deciding the method appropriate for the environment. These basic scenarios can be combined to build the replication topology that best suits the network environment. Section 7.2.1, "Single-Supplier Replication" Section 7.2.2, "Multi-Supplier Replication" Section 7.2.3, "Cascading Replication" Section 7.2.4, "Mixed Environments" 7.2.1. Single-Supplier Replication In the most basic replication configuration, a supplier server copies a replica directly to one or more consumer servers. In this configuration, all directory modifications occur on the read-write replica on the supplier server, and the consumer servers contain read-only replicas of the data. All modifications are made on the supplier server's read-write replica and are then propagated to the read-only replicas stored on the consumer servers. This is illustrated below. Figure 7.1. Single-Supplier Replication The supplier server can replicate a read-write replica to several consumer servers. The total number of consumer servers that a single supplier server can manage depends on the speed of the networks and the total number of entries that are modified on a daily basis. However, a supplier server is capable of maintaining several consumer servers. 7.2.2. Multi-Supplier Replication In a multi-supplier replication environment, main copies of the same information can exist on multiple servers. This means that data can be updated simultaneously in different locations. The changes that occur on each server are replicated to the other servers. This means that each server functions as both a supplier and a consumer. Note Red Hat Directory Server supports a maximum of 20 supplier servers in any replication environment, as well as an unlimited number of hub suppliers. The number of consumer servers that hold the read-only replicas is unlimited. When the same data is modified on multiple servers, there is a conflict resolution procedure to determine which change is kept. The Directory Server considers the valid change to be the most recent one. Multiple servers can have main copies of the same data, but, within the scope of a single replication agreement, there is only one supplier server and one consumer. Consequently, to create a multi-supplier environment between two supplier servers that share responsibility for the same data, create more than one replication agreement. Figure 7.2. Simplified Multi-Supplier Replication Configuration In Figure 7.2, "Simplified Multi-Supplier Replication Configuration" , supplier A and supplier B each hold a read-write replica of the same data. Figure 7.3, "Replication Traffic in a Simple Multi-Supplier Environment" illustrates the replication traffic with two suppliers (read-write replicas in the illustration), and two consumers (read-only replicas in the illustration). The consumers can be updated by both suppliers. The supplier servers ensure that the changes do not collide. Replication in Directory Server can support as many as 20 suppliers, which all share responsibility for the same data. Using that many suppliers requires creating a range of replication agreements. (Also remember that in multi-supplier replication, each of the suppliers can be configured in different topologies - meaning there can be 20 different directory trees and even schema differences.
There are many variables that have a direct impact on the topology selection.) In multi-supplier replication, the suppliers can send updates to all other suppliers or to some subset of other suppliers. Sending updates to all other suppliers means that changes are propagated faster and the overall scenario has much better failure-tolerance. However, it also increases the complexity of configuring suppliers and introduces high network demand and high server demand. Sending updates to a subset of suppliers is much simpler to configure and reduces the network and server loads, but there is a risk that data could be lost if there were multiple server failures. Figure 7.4, "Multi-Supplier Replication Configuration A" illustrates a fully connected mesh topology where four supplier servers feed data to the other three supplier servers (which also function as consumers). A total of twelve replication agreements exist between the four supplier servers. Figure 7.4. Multi-Supplier Replication Configuration A Figure 7.5, "Multi-Supplier Replication Configuration B" illustrates a topology where each supplier server feeds data to two other supplier servers (which also function as consumers). Only eight replication agreements exist between the four supplier servers, compared to the twelve agreements shown for the topology in Figure 7.4, "Multi-Supplier Replication Configuration A" . This topology is beneficial where the possibility of two or more servers failing at the same time is negligible. Figure 7.5. Multi-Supplier Replication Configuration B Those two examples are simplified multi-supplier scenarios. Since Red Hat Directory Server can have as many as 20 suppliers and an unlimited number of hub suppliers in a single multi-supplier environment, the replication topology can become much more complex. For example, Figure 7.4, "Multi-Supplier Replication Configuration A" has 12 replication agreements (four suppliers with three agreements each). If there are 20 suppliers, then there are 380 replication agreements (20 servers with 19 agreements each); the short calculation after this section makes that arithmetic explicit. When planning multi-supplier replication, consider: How many suppliers there will be What their geographic locations are The path the suppliers will use to update servers in other locations The topologies, directory trees, and schemas of the different suppliers The network quality The server load and performance The update interval required for directory data 7.2.3. Cascading Replication In a cascading replication scenario, a hub supplier receives updates from a supplier server and replays those updates on consumer servers. The hub supplier is a hybrid; it holds a read-only replica, like a typical consumer server, and it also maintains a changelog like a typical supplier server. Hub suppliers forward supplier data as they receive it from the original suppliers. Similarly, when a hub supplier receives an update request from a directory client, it refers the client to the supplier server. Cascading replication is useful if some of the network connections between various locations in the organization are better than others. For example, Example Corp. keeps the main copy of its directory data in Minneapolis, and the consumer servers in New York and Chicago. The network connection between Minneapolis and New York is very good, but the connection between Minneapolis and Chicago is poor. Since the network between New York and Chicago is fair, Example administrators use cascading replication to move directory data from Minneapolis to New York to Chicago. Figure 7.6.
Cascading Replication Scenario Figure 7.7, "Replication Traffic and Changelogs in Cascading Replication" illustrates the same scenario from a different perspective, which shows how the replicas are configured on each server (read-write or read-only), and which servers maintain a changelog. Figure 7.7. Replication Traffic and Changelogs in Cascading Replication 7.2.4. Mixed Environments Any of the replication scenarios can be combined to suit the needs of the network and directory environment. One common combination is to use a multi-supplier configuration with a cascading configuration. Figure 7.8. Combined Multi-Supplier and Cascading Replication
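The agreement counts quoted in the multi-supplier examples follow from the topology itself: a fully connected mesh of n suppliers needs n * (n - 1) agreements, while a topology in which each supplier feeds a fixed number of peers needs n * peers. A throwaway shell calculation, included only to make that arithmetic explicit:

# Fully connected mesh: every supplier holds one agreement to every other supplier
n=4;  echo $(( n * (n - 1) ))   # 12 agreements (Configuration A)
n=20; echo $(( n * (n - 1) ))   # 380 agreements

# Each of four suppliers feeds two peers (Configuration B)
n=4;  echo $(( n * 2 ))         # 8 agreements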
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_replication_process-common_replication_scenarios
8.3.3. Huge Pages and Transparent Huge Pages (THP)
8.3.3. Huge Pages and Transparent Huge Pages (THP) x86 CPUs usually address memory in 4kB pages, but they are capable of using larger 2 MB or 1 GB pages known as huge pages. KVM guests can be deployed with huge page memory support in order to improve performance by increasing CPU cache hits against the Translation Lookaside Buffer (TLB). A kernel feature enabled by default in Red Hat Enterprise Linux 6, huge pages can significantly increase performance, particularly for large memory and memory-intensive workloads. Red Hat Enterprise Linux 6 is able to more effectively manage large amounts of memory by increasing the page size through the use of huge pages. Red Hat Enterprise Linux 6.7 systems support both 2 MB and 1 GB huge pages, which can be allocated at boot or at runtime. See Section 8.3.3.3, "Enabling 1 GB huge pages for guests at boot or runtime" for instructions on enabling multiple huge page sizes. 8.3.3.1. Configuring Transparent Huge Pages Transparent huge pages (THP) automatically optimize system settings for performance. By allowing all free memory to be used as cache, performance is increased. As KSM can reduce the occurrence of transparent huge pages, you may want to disable it before enabling THP. If you want to disable KSM, refer to Section 8.4.4, "Deactivating KSM". Transparent huge pages are enabled by default. To check the current status, run: To enable transparent huge pages to be used by default, run: This will set /sys/kernel/mm/transparent_hugepage/enabled to always. To disable transparent huge pages: Transparent Huge Page support does not prevent the use of static huge pages. However, when static huge pages are not used, KVM will use transparent huge pages instead of the regular 4kB page size. A brief sketch of reserving static huge pages follows the command examples below.
[ "cat /sys/kernel/mm/transparent_hugepage/enabled", "echo always > /sys/kernel/mm/transparent_hugepage/enabled", "echo never > /sys/kernel/mm/transparent_hugepage/enabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-memory-huge_pages
4.2. Keepalived Direct Routing Configuration
4.2. Keepalived Direct Routing Configuration The Direct Routing configuration of Keepalived is similar to the NAT configuration. In the following example, Keepalived is configured to provide load balancing for a group of real servers running HTTP on port 80. To configure Direct Routing, change the lb_kind parameter to DR. Other configuration options are discussed in Section 4.1, "A Basic Keepalived configuration" . The first example below shows the keepalived.conf file for the active server in a Keepalived configuration that uses direct routing. The second example shows the keepalived.conf file for the backup server in a Keepalived configuration that uses direct routing. Note that the state and priority values differ from the keepalived.conf file in the active server. A quick way to verify the resulting direct routing setup with ipvsadm is sketched after the example files.
[ "global_defs { notification_email { [email protected] } notification_email_from [email protected] smtp_server 127.0.0.1 smtp_connect_timeout 60 } vrrp_instance RH_1 { state MASTER interface eth0 virtual_router_id 50 priority 100 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 172.31.0.1 } } virtual_server 172.31.0.1 80 { delay_loop 10 lb_algo rr lb_kind DR persistence_timeout 9600 protocol TCP real_server 192.168.0.1 80 { weight 1 TCP_CHECK { connect_timeout 10 connect_port 80 } } real_server 192.168.0.2 80 { weight 1 TCP_CHECK { connect_timeout 10 connect_port 80 } } real_server 192.168.0.3 80 { weight 1 TCP_CHECK { connect_timeout 10 connect_port 80 } } }", "global_defs { notification_email { [email protected] } notification_email_from [email protected] smtp_server 127.0.0.1 smtp_connect_timeout 60 } vrrp_instance RH_1 { state BACKUP interface eth0 virtual_router_id 50 priority 99 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 172.31.0.1 } } virtual_server 172.31.0.1 80 { delay_loop 10 lb_algo rr lb_kind DR persistence_timeout 9600 protocol TCP real_server 192.168.0.1 80 { weight 1 TCP_CHECK { connect_timeout 10 connect_port 80 } } real_server 192.168.0.2 80 { weight 1 TCP_CHECK { connect_timeout 10 connect_port 80 } } real_server 192.168.0.3 80 { weight 1 TCP_CHECK { connect_timeout 10 connect_port 80 } } }" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-initial-setup-conf-DR-VSA
Chapter 1. Getting Started with Infinispan Query
Chapter 1. Getting Started with Infinispan Query 1.1. Introduction The Red Hat JBoss Data Grid Library mode Querying API enables you to search for entries in the grid using values instead of keys. It provides features such as: Keyword, Range, Fuzzy, Wildcard, and Phrase queries Combining queries Sorting, filtering, and pagination of query results This API, which is based on Apache Lucene and Hibernate Search, is supported in JBoss Data Grid. Additionally, JBoss Data Grid provides an alternate mechanism that allows direct indexless searching. See Chapter 6, The Infinispan Query DSL for details.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/chap-getting_started_with_infinispan_query
Chapter 15. IngressController [operator.openshift.io/v1]
Chapter 15. IngressController [operator.openshift.io/v1] Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the IngressController. status object status is the most recently observed status of the IngressController. 15.1.1. .spec Description spec is the specification of the desired behavior of the IngressController. Type object Property Type Description clientTLS object clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. defaultCertificate object defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. domain string domain is a DNS name serviced by the ingress controller and is used to configure multiple features: * For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy. * When using a generated default certificate, the certificate will be valid for domain and its subdomains. 
See defaultCertificate. * The value is published to individual Route statuses so that end-users know where to target external DNS records. domain must be unique among all IngressControllers, and cannot be updated. If empty, defaults to ingress.config.openshift.io/cluster .spec.domain. endpointPublishingStrategy object endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. httpCompression object httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. httpEmptyRequestsPolicy string httpEmptyRequestsPolicy describes how HTTP connections should be handled if the connection times out before a request is received. Allowed values for this field are "Respond" and "Ignore". If the field is set to "Respond", the ingress controller sends an HTTP 400 or 408 response, logs the connection (if access logging is enabled), and counts the connection in the appropriate metrics. If the field is set to "Ignore", the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. The default value is "Respond". Typically, these connections come from load balancers' health probes or Web browsers' speculative connections ("preconnect") and can be safely ignored. However, these requests may also be caused by network errors, and so setting this field to "Ignore" may impede detection and diagnosis of problems. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. httpErrorCodePages object httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. httpHeaders object httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. logging object logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. namespaceSelector object namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. nodePlacement object nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. 
replicas integer replicas is the desired number of ingress controller replicas. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. The value of replicas is set based on the value of a chosen field in the Infrastructure CR. If defaultPlacement is set to ControlPlane, the chosen field will be controlPlaneTopology. If it is set to Workers, the chosen field will be infrastructureTopology. Replicas will then be set to 1 or 2 based on whether the chosen field's value is SingleReplica or HighlyAvailable, respectively. These defaults are subject to change. routeAdmission object routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. routeSelector object routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. tuningOptions object tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. unsupportedConfigOverrides `` unsupportedConfigOverrides allows specifying unsupported configuration options. Its use is unsupported. 15.1.2. .spec.clientTLS Description clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. Type object Required clientCA clientCertificatePolicy Property Type Description allowedSubjectPatterns array (string) allowedSubjectPatterns specifies a list of regular expressions that should be matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. If this list is empty, no filtering is performed. If the list is nonempty, then at least one pattern must match a client certificate's distinguished name or else the ingress controller rejects the certificate and denies the connection. clientCA object clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. clientCertificatePolicy string clientCertificatePolicy specifies whether the ingress controller requires clients to provide certificates. This field accepts the values "Required" or "Optional". Note that the ingress controller only checks client certificates for edge-terminated and reencrypt TLS routes; it cannot check certificates for cleartext HTTP or passthrough TLS routes. 15.1.3.
.spec.clientTLS.clientCA Description clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.4. .spec.defaultCertificate Description defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 15.1.5. .spec.endpointPublishingStrategy Description endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. 
Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.6. .spec.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks. The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready.
The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.7. .spec.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.8. .spec.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.9. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. 
networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.10. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. 15.1.11. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object 15.1.12. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.13. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. 
Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.14. .spec.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.15. .spec.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.16. .spec.httpCompression Description httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. Type object Property Type Description mimeTypes array (string) mimeTypes is a list of MIME types that should have compression applied. This list can be empty, in which case the ingress controller does not apply compression. 
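For instance, a minimal spec fragment that enables compression for a few common text-based types might look like the following sketch; the MIME type list is an illustrative assumption, not a default or a recommended set.

spec:
  httpCompression:
    mimeTypes:
    - "text/html"                # uncompressed text formats generally benefit from compression
    - "text/css; charset=utf-8"
    - "application/json"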
Note: Not all MIME types benefit from compression, but HAProxy will still use resources to try to compress if instructed to. Generally speaking, text (html, css, js, etc.) formats benefit from compression, but formats that are already compressed (image, audio, video, etc.) benefit little in exchange for the time and CPU spent on compressing again. See https://joehonton.medium.com/the-gzip-penalty-d31bd697f1a2 15.1.17. .spec.httpErrorCodePages Description httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. For example: https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.18. .spec.httpHeaders Description httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. Type object Property Type Description forwardedHeaderPolicy string forwardedHeaderPolicy specifies when and how the IngressController sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. The value may be one of the following: * "Append", which specifies that the IngressController appends the headers, preserving existing headers. * "Replace", which specifies that the IngressController sets the headers, replacing any existing Forwarded or X-Forwarded-* headers. * "IfNone", which specifies that the IngressController sets the headers if they are not already set. * "Never", which specifies that the IngressController never sets the headers, preserving any existing headers. By default, the policy is "Append". headerNameCaseAdjustments `` headerNameCaseAdjustments specifies case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying "X-Forwarded-For" indicates that the "x-forwarded-for" HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. uniqueId object uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests.
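As an illustration, a spec fragment that sets the forwarding policy, adjusts header-name capitalization for HTTP/1 routes, and injects a tracing header might look like the following sketch; the values shown are assumptions, not defaults or recommendations.

spec:
  httpHeaders:
    forwardedHeaderPolicy: Append
    headerNameCaseAdjustments:
    - X-Forwarded-For            # rewrites "x-forwarded-for" to this capitalization on HTTP/1 routes with the h1-adjust-case annotation
    uniqueId:
      name: unique-id            # example header name from the field description; the default HAProxy format is used when format is unset

15.1.19.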
.spec.httpHeaders.uniqueId Description uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. Type object Property Type Description format string format specifies the format for the injected HTTP header's value. This field has no effect unless name is specified. For the HAProxy-based ingress controller implementation, this format uses the same syntax as the HTTP log format. If the field is empty, the default value is "%{+X}o\\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid"; see the corresponding HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 name string name specifies the name of the HTTP header (for example, "unique-id") that the ingress controller should inject into HTTP requests. The field's value must be a valid HTTP header name as defined in RFC 2616 section 4.2. If the field is empty, no header is injected. 15.1.20. .spec.logging Description logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. Type object Property Type Description access object access describes how the client requests should be logged. If this field is empty, access logging is disabled. 15.1.21. .spec.logging.access Description access describes how the client requests should be logged. If this field is empty, access logging is disabled. Type object Required destination Property Type Description destination object destination is where access logs go. httpCaptureCookies `` httpCaptureCookies specifies HTTP cookies that should be captured in access logs. If this field is empty, no cookies are captured. httpCaptureHeaders object httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. httpLogFormat string httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 Note that this format only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). It does not affect the log format for TLS passthrough connections. logEmptyRequests string logEmptyRequests specifies how connections on which no request is received should be logged. Typically, these empty requests come from load balancers' health probes or Web browsers' speculative connections ("preconnect"), in which case logging these requests may be undesirable. However, these requests may also be caused by network errors, in which case logging empty requests may be useful for diagnosing the errors. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. 
Allowed values for this field are "Log" and "Ignore". The default value is "Log". 15.1.22. .spec.logging.access.destination Description destination is where access logs go. Type object Required type Property Type Description container object container holds parameters for the Container logging destination. Present only if type is Container. syslog object syslog holds parameters for a syslog endpoint. Present only if type is Syslog. type string type is the type of destination for logs. It must be one of the following: * Container The ingress operator configures the sidecar container named "logs" on the ingress controller pod and configures the ingress controller to write logs to the sidecar. The logs are then available as container logs. The expectation is that the administrator configures a custom logging solution that reads logs from this sidecar. Note that using container logs means that logs may be dropped if the rate of logs exceeds the container runtime's or the custom logging solution's capacity. * Syslog Logs are sent to a syslog endpoint. The administrator must specify an endpoint that can receive syslog messages. The expectation is that the administrator has configured a custom syslog instance. 15.1.23. .spec.logging.access.destination.container Description container holds parameters for the Container logging destination. Present only if type is Container. Type object 15.1.24. .spec.logging.access.destination.syslog Description syslog holds parameters for a syslog endpoint. Present only if type is Syslog. Type object Required address port Property Type Description address string address is the IP address of the syslog endpoint that receives log messages. facility string facility specifies the syslog facility of log messages. If this field is empty, the facility is "local1". maxLength integer maxLength is the maximum length of the syslog message. If this field is empty, the maxLength is set to "1024". port integer port is the UDP port number of the syslog endpoint that receives log messages. 15.1.25. .spec.logging.access.httpCaptureHeaders Description httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. Type object Property Type Description request `` request specifies which HTTP request headers to capture. If this field is empty, no request headers are captured. response `` response specifies which HTTP response headers to capture. If this field is empty, no response headers are captured. 15.1.26. .spec.namespaceSelector Description namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value".
The requirements are ANDed. 15.1.27. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.28. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.29. .spec.nodePlacement Description nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. Type object Property Type Description nodeSelector object nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. tolerations array tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 15.1.30. .spec.nodePlacement.nodeSelector Description nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.31. .spec.nodePlacement.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.32. .spec.nodePlacement.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.33. .spec.nodePlacement.tolerations Description tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Type array 15.1.34. .spec.nodePlacement.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 15.1.35. .spec.routeAdmission Description routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. Type object Property Type Description namespaceOwnership string namespaceOwnership describes how host name claims across namespaces should be handled. Value must be one of: - Strict: Do not allow routes in different namespaces to claim the same host. - InterNamespaceAllowed: Allow routes to claim different paths of the same host name across namespaces. If empty, the default is Strict. wildcardPolicy string wildcardPolicy describes how routes with wildcard policies should be handled for the ingress controller. 
WildcardPolicy controls use of routes [1] exposed by the ingress controller based on the route's wildcard policy. [1] https://github.com/openshift/api/blob/master/route/v1/types.go Note: Updating WildcardPolicy from WildcardsAllowed to WildcardsDisallowed will cause admitted routes with a wildcard policy of Subdomain to stop working. These routes must be updated to a wildcard policy of None to be readmitted by the ingress controller. WildcardPolicy supports WildcardsAllowed and WildcardsDisallowed values. If empty, defaults to "WildcardsDisallowed". 15.1.36. .spec.routeSelector Description routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.37. .spec.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.38. .spec.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.39. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. 
An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.3 NOTE: Currently unsupported. old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: TLSv1.0 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 15.1.40. .spec.tuningOptions Description tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. Type object Property Type Description clientFinTimeout string clientFinTimeout defines how long a connection will be held open while waiting for the client response to the server/backend closing the connection. If unset, the default timeout is 1s clientTimeout string clientTimeout defines how long a connection will be held open while waiting for a client response. If unset, the default timeout is 30s headerBufferBytes integer headerBufferBytes describes how much memory should be reserved (in bytes) for IngressController connection sessions. 
Note that this value must be at least 16384 if HTTP/2 is enabled for the IngressController ( https://tools.ietf.org/html/rfc7540 ). If this field is empty, the IngressController will use a default value of 32768 bytes. Setting this field is generally not recommended as headerBufferBytes values that are too small may break the IngressController and headerBufferBytes values that are too large could cause the IngressController to use significantly more memory than necessary. headerBufferMaxRewriteBytes integer headerBufferMaxRewriteBytes describes how much memory should be reserved (in bytes) from headerBufferBytes for HTTP header rewriting and appending for IngressController connection sessions. Note that incoming HTTP requests will be limited to (headerBufferBytes - headerBufferMaxRewriteBytes) bytes, meaning headerBufferBytes must be greater than headerBufferMaxRewriteBytes. If this field is empty, the IngressController will use a default value of 8192 bytes. Setting this field is generally not recommended as headerBufferMaxRewriteBytes values that are too small may break the IngressController and headerBufferMaxRewriteBytes values that are too large could cause the IngressController to use significantly more memory than necessary. healthCheckInterval string healthCheckInterval defines how long the router waits between two consecutive health checks on its configured backends. This value is applied globally as a default for all routes, but may be overridden per-route by the route annotation "router.openshift.io/haproxy.health.check.interval". Expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, eg "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Setting this to less than 5s can cause excess traffic due to too frequent TCP health checks and accompanying SYN packet storms. Alternatively, setting this too high can result in increased latency, due to backend servers that are no longer available, but haven't yet been detected as such. An empty or zero healthCheckInterval means no opinion and IngressController chooses a default, which is subject to change over time. Currently the default healthCheckInterval value is 5s. Currently the minimum allowed value is 1s and the maximum allowed value is 2147483647ms (24.85 days). Both are subject to change over time. maxConnections integer maxConnections defines the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections but at the cost of additional system resources being consumed. Permitted values are: empty, 0, -1, and the range 2000-2000000. If this field is empty or 0, the IngressController will use the default value of 50000, but the default is subject to change in future releases. If the value is -1 then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. Selecting -1 (i.e., auto) will result in a large value being computed (~520000 on OpenShift >=4.10 clusters) and therefore each HAProxy process will incur significant memory usage compared to the current default of 50000. Setting a value that is greater than the current operating system limit will prevent the HAProxy process from starting. If you choose a discrete value (e.g., 750000) and the router pod is migrated to a new node, there's no guarantee that that new node has identical ulimits configured.
In such a scenario the pod would fail to start. If you have nodes with different ulimits configured (e.g., different tuned profiles) and you choose a discrete value then the guidance is to use -1 and let the value be computed dynamically at runtime. You can monitor memory usage for router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}'. You can monitor memory usage of individual HAProxy processes in router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}/container_processes{container="router",namespace="openshift-ingress"}'. reloadInterval string reloadInterval defines the minimum interval at which the router is allowed to reload to accept new changes. Increasing this value can prevent the accumulation of HAProxy processes, depending on the scenario. Increasing this interval can also lessen load imbalance on a backend's servers when using the roundrobin balancing algorithm. Alternatively, decreasing this value may decrease latency since updates to HAProxy's configuration can take effect more quickly. The value must be a time duration value; see https://pkg.go.dev/time#ParseDuration . Currently, the minimum value allowed is 1s, and the maximum allowed value is 120s. Minimum and maximum allowed values may change in future versions of OpenShift. Note that if a duration outside of these bounds is provided, the value of reloadInterval will be capped/floored and not rejected (e.g. a duration of over 120s will be capped to 120s; the IngressController will not reject and replace this disallowed value with the default). A zero value for reloadInterval tells the IngressController to choose the default, which is currently 5s and subject to change without notice. This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Note: Setting a value significantly larger than the default of 5s can cause latency in observing updates to routes and their endpoints. HAProxy's configuration will be reloaded less frequently, and newly created routes will not be served until the subsequent reload. serverFinTimeout string serverFinTimeout defines how long a connection will be held open while waiting for the server/backend response to the client closing the connection. If unset, the default timeout is 1s serverTimeout string serverTimeout defines how long a connection will be held open while waiting for a server/backend response. If unset, the default timeout is 30s threadCount integer threadCount defines the number of threads created per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy currently supports up to 64 threads. If this field is empty, the IngressController will use the default value. The current default is 4 threads, but this may change in future releases. Setting this field is generally not recommended. Increasing the number of HAProxy threads allows ingress controller pods to utilize more CPU time under load, potentially starving other pods if set too high. Reducing the number of threads may cause the ingress controller to perform poorly. tlsInspectDelay string tlsInspectDelay defines how long the router can hold data to find a matching route.
Setting this too short can cause the router to fall back to the default certificate for edge-terminated or reencrypt routes even when a better matching certificate could be used. If unset, the default inspect delay is 5s tunnelTimeout string tunnelTimeout defines how long a tunnel connection (including websockets) will be held open while the tunnel is idle. If unset, the default timeout is 1h 15.1.41. .status Description status is the most recently observed status of the IngressController. Type object Property Type Description availableReplicas integer availableReplicas is number of observed available replicas according to the ingress controller deployment. conditions array conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. conditions[] object OperatorCondition is just the standard condition fields. domain string domain is the actual domain in use. endpointPublishingStrategy object endpointPublishingStrategy is the actual strategy in use. namespaceSelector object namespaceSelector is the actual namespaceSelector in use. observedGeneration integer observedGeneration is the most recent generation observed. routeSelector object routeSelector is the actual routeSelector in use. selector string selector is a label selector, in string format, for ingress controller pods corresponding to the IngressController. The number of matching pods should equal the value of availableReplicas. tlsProfile object tlsProfile is the TLS connection configuration that is in effect. 15.1.42. .status.conditions Description conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. 
* DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. Type array 15.1.43. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 15.1.44. .status.endpointPublishingStrategy Description endpointPublishingStrategy is the actual strategy in use. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.45. .status.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster.
When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks. The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready. The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.46. .status.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. 
scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.47. .status.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.48. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.49. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. 15.1.50. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object 15.1.51. 
.status.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.52. .status.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.53. .status.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. 
The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.54. .status.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.55. .status.namespaceSelector Description namespaceSelector is the actual namespaceSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.56. .status.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.57. .status.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.58. .status.routeSelector Description routeSelector is the actual routeSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.59. .status.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.60. .status.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.61. .status.tlsProfile Description tlsProfile is the TLS connection configuration that is in effect. Type object Property Type Description ciphers array (string) ciphers is used to specify the cipher algorithms that are negotiated during the TLS handshake. Operators may remove entries their operands do not support. For example, to use DES-CBC3-SHA (yaml): ciphers: - DES-CBC3-SHA minTLSVersion string minTLSVersion is used to specify the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2 and 1.3 (yaml): minTLSVersion: TLSv1.1 NOTE: currently the highest minTLSVersion allowed is VersionTLS12 15.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/ingresscontrollers GET : list objects of kind IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers DELETE : delete collection of IngressController GET : list objects of kind IngressController POST : create an IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} DELETE : delete an IngressController GET : read the specified IngressController PATCH : partially update the specified IngressController PUT : replace the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale GET : read scale of the specified IngressController PATCH : partially update scale of the specified IngressController PUT : replace scale of the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status GET : read status of the specified IngressController PATCH : partially update status of the specified IngressController PUT : replace status of the specified IngressController 15.2.1. /apis/operator.openshift.io/v1/ingresscontrollers Table 15.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind IngressController Table 15.2. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty 15.2.2. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers Table 15.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of IngressController Table 15.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IngressController Table 15.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.8. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressController Table 15.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.10. Body parameters Parameter Type Description body IngressController schema Table 15.11. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 202 - Accepted IngressController schema 401 - Unauthorized Empty 15.2.3. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} Table 15.12. Global path parameters Parameter Type Description name string name of the IngressController namespace string object name and auth scope, such as for teams and projects Table 15.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an IngressController Table 15.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.15. Body parameters Parameter Type Description body DeleteOptions schema Table 15.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressController Table 15.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.18. 
HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressController Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.20. Body parameters Parameter Type Description body Patch schema Table 15.21. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressController Table 15.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.23. Body parameters Parameter Type Description body IngressController schema Table 15.24. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty 15.2.4. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale Table 15.25. Global path parameters Parameter Type Description name string name of the IngressController namespace string object name and auth scope, such as for teams and projects Table 15.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified IngressController Table 15.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified IngressController Table 15.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.30. Body parameters Parameter Type Description body Patch schema Table 15.31. 
HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified IngressController Table 15.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.33. Body parameters Parameter Type Description body Scale schema Table 15.34. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 15.2.5. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status Table 15.35. Global path parameters Parameter Type Description name string name of the IngressController namespace string object name and auth scope, such as for teams and projects Table 15.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified IngressController Table 15.37. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.38. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified IngressController Table 15.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.40. Body parameters Parameter Type Description body Patch schema Table 15.41. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified IngressController Table 15.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.43. Body parameters Parameter Type Description body IngressController schema Table 15.44. 
HTTP responses HTTP code Response body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty
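As a closing illustration of the load balancer, selector, and endpoint-publishing fields documented in this chapter, the following is a minimal, hypothetical IngressController manifest. It assumes that the spec-side endpointPublishingStrategy, loadBalancer, and routeSelector fields mirror the status fields described above; the name, domain, label, and CIDR values are placeholders rather than values taken from this reference. apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: example-lb # placeholder name namespace: openshift-ingress-operator spec: domain: apps.example.com # placeholder domain routeSelector: matchLabels: type: sharded # placeholder label; matchExpressions could be used instead endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External dnsManagementPolicy: Managed allowedSourceRanges: - 10.0.0.0/8 # placeholder CIDR, in the notation described above providerParameters: type: AWS aws: type: NLB A manifest like this could be created with the POST endpoint listed in section 15.2.2 or with oc apply -f ; after the Ingress Operator reconciles it, the status fields described earlier in this chapter report the values actually in effect.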
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/ingresscontroller-operator-openshift-io-v1
5.187. mcelog
5.187. mcelog 5.187.1. RHBA-2012:0779 - mcelog bug fix and enhancement update Updated mcelog packages that fix three bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The mcelog packages provide the mcelog daemon to collect and decode Machine Check Exception data on AMD64 and Intel 64 platforms. Bug Fixes BZ# 728265 Prior to this update, the mcelog README file contained references to nonexistent directories. This update removes these references and updates the file. BZ# 769363 Prior to this update, when mcelog was run on Intel CPUs with only architectural decoding enabled, the mcelog daemon wrongly displayed an error stating that the microarchitecture was not supported even though the CPU was supported. This update removes this message. BZ# 784091 Prior to this update, a cron job tried to install regardless of whether the system was supported. As a result, the mcelog daemon displayed the message "No such device" if mcelog was installed on unsupported systems. This update prevents the cron job from installing on unsupported processors. Enhancements BZ# 746785 Prior to this update, the mcelog daemon displayed the error "mcelog read: No such device" when running on the unsupported AMD Family 16 microarchitecture or higher. This update adds a check to mcelog to determine which AMD processor family is used. If needed, the new message "CPU is unsupported" is displayed. BZ# 795508 Prior to this update, the cron file for mcelog did not use the "--supported" option. As a consequence, the "--supported" option did not correctly check whether the mcelog daemon worked. This update adds the "--supported" option to the crontab file and removes two redundant strings. All users of mcelog are advised to upgrade to these updated mcelog packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mcelog
8.243. trace-cmd
8.243. trace-cmd 8.243.1. RHBA-2014:1559 - trace-cmd bug fix update Updated trace-cmd packages that fix one bug are now available for Red Hat Enterprise Linux 6. The trace-cmd packages contain a command-line tool that interfaces with the ftrace utility in the kernel. Bug Fix BZ# 879814 Due to invalid pointers, executing the "trace-cmd split" or "trace-cmd report" commands after running latency tracers failed with a segmentation fault. With this update, additional checks have been added to ensure that the pointers are properly initialized before attempting to use them. As a result, the segmentation fault no longer occurs in the described scenario. Users of trace-cmd are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/trace-cmd
Chapter 1. Clair security scanner
Chapter 1. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 1.1. About Clair Clair uses Common Vulnerability Scoring System (CVSS) data from the National Vulnerability Database (NVD), a United States government repository of security-related information that includes known vulnerabilities and security issues in various software components and systems, to enrich vulnerability data. Using scores from the NVD provides Clair with the following benefits: Data synchronization . Clair can periodically synchronize its vulnerability database with the NVD. This ensures that it has the latest vulnerability data. Matching and enrichment . Clair compares the metadata and identifiers of vulnerabilities it discovers in container images with the data from the NVD. This process involves matching the unique identifiers, such as Common Vulnerabilities and Exposures (CVE) IDs, to the entries in the NVD. When a match is found, Clair can enrich its vulnerability information with additional details from the NVD, such as severity scores, descriptions, and references. Severity Scores . The NVD assigns severity scores to vulnerabilities, such as the Common Vulnerability Scoring System (CVSS) score, to indicate the potential impact and risk associated with each vulnerability. By incorporating the NVD's severity scores, Clair can provide more context on the seriousness of the vulnerabilities it detects. If Clair finds vulnerabilities from the NVD, a detailed and standardized assessment of the severity and potential impact of vulnerabilities detected within container images is reported to users on the UI. CVSS enrichment data provides Clair with the following benefits: Vulnerability prioritization . By utilizing CVSS scores, users can prioritize vulnerabilities based on their severity, helping them address the most critical issues first. Assess Risk . CVSS scores can help Clair users understand the potential risk a vulnerability poses to their containerized applications. Communicate Severity . CVSS scores provide Clair users a standardized way to communicate the severity of vulnerabilities across teams and organizations. Inform Remediation Strategies . CVSS enrichment data can guide Quay.io users in developing appropriate remediation strategies. Compliance and Reporting . Integrating CVSS data into reports generated by Clair can help organizations demonstrate their commitment to addressing security vulnerabilities and complying with industry standards and regulations. 1.1.1. Clair releases New versions of Clair are regularly released. The source code needed to build Clair is packaged as an archive and attached to each release. Clair releases can be found at Clair releases . Release artifacts also include the clairctl command line interface tool, which obtains updater data from the internet by using an open host. Clair 4.7.4 Clair 4.7.4 was released on 2024-05-01. The following changes have been made: The default layer download location has changed. For more information, see Disk usage considerations . Clair 4.7.3 Clair 4.7.3 was released on 2024-02-26. The following changes have been made: The minimum TLS version for Clair is now 1.2.
Previously, servers allowed for 1.1 connections. Clair 4.7.2 Clair 4.7.2 was released on 2023-10-09. The following changes have been made: CRDA support has been removed. Clair 4.7.1 Clair 4.7.1 was released as part of Red Hat Quay 3.9.1. The following changes have been made: With this release, you can view unpatched vulnerabilities from Red Hat Enterprise Linux (RHEL) sources. If you want to view unpatched vulnerabilities, you can set the ignore_unpatched parameter to false . For example: updaters: config: rhel: ignore_unpatched: false To disable this feature, you can set ignore_unpatched to true . Clair 4.7 Clair 4.7 was released as part of Red Hat Quay 3.9, and includes support for the following features: Native support for indexing Golang modules and RubyGems in container images. Change to OSV.dev as the vulnerability database source for any programming language package managers. This includes popular sources like GitHub Security Advisories or PyPA. This allows offline capability. Use of pyup.io for Python and CRDA for Java is suspended. Clair now supports Java, Golang, Python, and Ruby dependencies. 1.1.2. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database 1.1.3. Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. When an image that contains packages from a language unsupported by Clair is pushed to your repository, a vulnerability scan cannot be performed on those packages. Users do not receive an analysis or security report for unsupported dependencies or packages. As a result, the following consequences should be considered: Security risks . Dependencies or packages that are not scanned for vulnerabilities might pose security risks to your organization. Compliance issues . If your organization has specific security or compliance requirements, unscanned, or partially scanned, container images might result in non-compliance with certain regulations. Note Scanned images are indexed, and a vulnerability report is created, but it might omit data from certain unsupported languages. For example, if your container image contains a Lua application, feedback from Clair is not provided because Clair does not detect it. It can detect other languages used in the container image and show detected CVEs for those languages. As a result, Clair images are fully scanned based on what is supported by Clair. 1.1.4. Clair containers Official downstream Clair containers bundled with Red Hat Quay can be found on the Red Hat Ecosystem Catalog . Official upstream containers are packaged and released under the Clair project on Quay.io . The latest tag tracks the Git development branch. Version tags are built from the corresponding release. 1.2. Clair severity mapping Clair offers a comprehensive approach to vulnerability assessment and management. One of its essential features is the normalization of security databases' severity strings. This process streamlines the assessment of vulnerability severities by mapping them to a predefined set of values.
Through this mapping, clients can efficiently react to vulnerability severities without the need to decipher the intricacies of each security database's unique severity strings. These mapped severity strings align with those found within the respective security databases, ensuring consistency and accuracy in vulnerability assessment. 1.2.1. Clair severity strings Clair alerts users with the following severity strings: Unknown Negligible Low Medium High Critical These severity strings are similar to the strings found within the relevant security database. Alpine mapping Alpine SecDB database does not provide severity information. All vulnerability severities will be Unknown. Alpine Severity Clair Severity * Unknown AWS mapping AWS UpdateInfo database provides severity information. AWS Severity Clair Severity low Low medium Medium important High critical Critical Debian mapping Debian Oval database provides severity information. Debian Severity Clair Severity * Unknown Unimportant Low Low Medium Medium High High Critical Oracle mapping Oracle Oval database provides severity information. Oracle Severity Clair Severity N/A Unknown LOW Low MODERATE Medium IMPORTANT High CRITICAL Critical RHEL mapping RHEL Oval database provides severity information. RHEL Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical SUSE mapping SUSE Oval database provides severity information. Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical Ubuntu mapping Ubuntu Oval database provides severity information. Severity Clair Severity Untriaged Unknown Negligible Negligible Low Low Medium Medium High High Critical Critical OSV mapping Table 1.1. CVSSv3 Base Score Clair Severity 0.0 Negligible 0.1-3.9 Low 4.0-6.9 Medium 7.0-8.9 High 9.0-10.0 Critical Table 1.2. CVSSv2 Base Score Clair Severity 0.0-3.9 Low 4.0-6.9 Medium 7.0-10 High
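To connect the updater and severity-mapping material above to a concrete deployment, the following is a minimal, hypothetical Clair config.yaml sketch. Only the updaters.config.rhel.ignore_unpatched setting comes from this guide (see the Clair 4.7.1 notes above); the listen address, connection strings, and updater set names are assumptions about a typical environment and must be adapted to your own deployment. # Hypothetical sketch only: adjust addresses, connection strings, and updater sets for your environment. http_listen_addr: ":8080" log_level: info indexer: connstring: host=clair-db port=5432 user=clair dbname=indexer sslmode=disable # placeholder database matcher: connstring: host=clair-db port=5432 user=clair dbname=matcher sslmode=disable # placeholder database updaters: sets: # restrict which vulnerability databases are fetched (see section 1.1.2) - rhel - osv config: rhel: ignore_unpatched: false # report unpatched RHEL vulnerabilities, as described in the Clair 4.7.1 notes Severity normalization itself requires no configuration; Clair applies the mappings in the tables above automatically when it reports vulnerabilities.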
[ "updaters: config: rhel: ignore_unpatched: false" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-vulnerability-scanner
Operate
Operate Red Hat OpenShift Lightspeed 1.0tp1 Using OpenShift Lightspeed Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_lightspeed/1.0tp1/html/operate/index
probe::vm.write_shared
probe::vm.write_shared Name probe::vm.write_shared - Attempts at writing to a shared page. Synopsis Values name Name of the probe point address The address of the shared write. Context The context is the process attempting the write. Description Fires when a process attempts to write to a shared page. If a copy is necessary, this will be followed by a vm.write_shared_copy.
[ "vm.write_shared" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-vm-write-shared
2.2. Compatible Versions
2.2. Compatible Versions The product and package versions required to create a supported deployment of Red Hat Gluster Storage (RHGS) nodes managed by the specified version of Red Hat Virtualization (RHV) are documented in the following knowledge base article: https://access.redhat.com/articles/2356261 .
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/Compatible_Versions
Chapter 29. Tips for undercloud and overcloud services
Chapter 29. Tips for undercloud and overcloud services This section provides advice on tuning and managing specific OpenStack services on the undercloud. 29.1. Tuning deployment performance Red Hat OpenStack Platform director uses OpenStack Orchestration (heat) to conduct the main deployment and provisioning functions. Heat uses a series of workers to execute deployment tasks. To calculate the default number of workers, the director heat configuration halves the total CPU thread count of the undercloud. In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value. For example, if your undercloud has a CPU with 16 threads, heat spawns 8 workers by default. The director configuration also uses a minimum and maximum cap by default: Service Minimum Maximum OpenStack Orchestration (heat) 4 24 However, you can set the number of workers manually with the HeatWorkers parameter in an environment file: heat-workers.yaml undercloud.conf 29.2. Running swift-ring-builder in a container To manage your Object Storage (swift) rings, use the swift-ring-builder commands inside the server containers: swift_object_server swift_container_server swift_account_server For example, to view information about your swift object rings, run the following command: You can run this command on both the undercloud and overcloud nodes. 29.3. Changing the SSL/TLS cipher rules for HAProxy If you enabled SSL/TLS in the undercloud (see Section 4.2, "Director configuration parameters" ), you might want to harden the SSL/TLS ciphers and rules that are used with the HAProxy configuration. This hardening helps to avoid SSL/TLS vulnerabilities, such as the POODLE vulnerability . Set the following hieradata using the hieradata_override undercloud configuration option: tripleo::haproxy::ssl_cipher_suite The cipher suite to use in HAProxy. tripleo::haproxy::ssl_options The SSL/TLS rules to use in HAProxy. For example, you might want to use the following cipher and rules: Cipher: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS Rules: no-sslv3 no-tls-tickets Create a hieradata override file ( haproxy-hiera-overrides.yaml ) with the following content: Note The cipher collection is one continuous line. Set the hieradata_override parameter in the undercloud.conf file to use the hieradata override file you created before you ran openstack undercloud install :
[ "parameter_defaults: HeatWorkers: 16", "custom_env_files: heat-workers.yaml", "sudo podman exec -ti -u swift swift_object_server swift-ring-builder /etc/swift/object.builder", "tripleo::haproxy::ssl_cipher_suite: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS tripleo::haproxy::ssl_options: no-sslv3 no-tls-tickets", "[DEFAULT] hieradata_override = haproxy-hiera-overrides.yaml" ]
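The default worker count described above can be reproduced with a short calculation. The following is a minimal sketch, not part of the director tooling; it simply applies the halving rule and the 4/24 caps to the thread count that nproc reports on the undercloud:

threads=$(nproc)                      # total CPU threads (cores x hyper-threading)
workers=$(( threads / 2 ))            # director halves the thread count
(( workers < 4 ))  && workers=4       # heat minimum cap
(( workers > 24 )) && workers=24      # heat maximum cap
echo "Default HeatWorkers: ${workers}"

On a 16-thread undercloud this prints 8, matching the example in the text; set HeatWorkers explicitly only when this default does not suit your deployment.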
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_tips-for-undercloud-and-overcloud-services
Chapter 23. Kernel
Chapter 23. Kernel The criu tool Red Hat Enterprise Linux 7.2 introduces the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space , which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. The criu tool depends on Protocol Buffers , a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, are also added to Red Hat Enterprise Linux 7.2 as a Technology Preview. User namespace This feature provides additional security to servers running Linux containers by providing better isolation between the host and the containers. Administrators of a container are no longer able to perform administrative operations on the host, which increases security. LPAR Watchdog for IBM System z An enhanced watchdog driver for IBM System z is available as a Technology Preview. This driver supports Linux logical partitions (LPAR) as well as Linux guests in the z/VM hypervisor, and provides automatic reboot and automatic dump capabilities if a Linux system becomes unresponsive. i40evf handles big resets The most common type of reset that the Virtual Function (VF) encounters is a Physical Function (PF) reset that cascades down into a VF reset for each VF. However, for 'bigger' resets, such as a Core or EMP reset, when the device is reinitialized, the VF previously did not get the same VSI, so the VF was not able to recover, as it continued to request resources for its original VSI. As a Technology Preview, this update adds an extra state to the admin queue state machine, so that the driver can re-request its configuration information at runtime. During reset recovery, this bit is set in the aq_required field, and the configuration information is fetched before attempting to bring the driver back up. Support for Intel(R) Omni-Path Architecture kernel driver Intel(R) Omni-Path Architecture (OPA) kernel driver, which is supported as a Technology Preview, provides Host Fabric Interconnect (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on how to obtain Intel(R) Omni-Path documentation, see https://access.redhat.com/articles/2039623 . Support for Diag0c on IBM System z As a Technology Preview, Red Hat Enterprise Linux 7.2 introduces support for the Diag0c feature on IBM System z. Diag0c support makes it possible to read the CPU performance metrics provided by the z/VM hypervisor, and allows obtaining the management time for each online CPU of a Linux guest where the diagnose task is executed. 10GbE RoCE Express feature for RDMA As a Technology Preview, Red Hat Enterprise Linux 7.2 includes the 10GbE RDMA over Converged Ethernet (RoCE) Express feature. This makes it possible to use Ethernet and Remote Direct Memory Access (RDMA), as well as the Direct Access Programming Library (DAPL) and OpenFabrics Enterprise Distribution (OFED) APIs, on IBM System z. Before using this feature on an IBM z13 system, ensure that the minimum required service is applied: z/VM APAR UM34525 and HW ycode N98778.057 (bundle 14). zEDC compression on IBM System z Red Hat Enterprise Linux 7.2 includes the Generic Workqueue (GenWQE) engine device driver as a Technology Preview. 
The initial task of the driver is to perform zlib-style compression and decompression of the RFC1950, RFC1951 and RFC1952 formats, but it can be adjusted to accelerate a variety of other tasks. Kexec as a Technology Preview The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot.
null
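As a rough illustration of the Checkpoint/Restore in User-space workflow described above, the following commands freeze a running process and later recreate it. The PID, the image directory, and the use of --shell-job (needed only for terminal-attached processes) are assumptions for the example, not a supported procedure for this Technology Preview:

sudo criu dump -t 1234 -D /var/tmp/criu-images --shell-job      # store the state of PID 1234 as a collection of files
sudo criu restore -D /var/tmp/criu-images --shell-job           # later, restore the application from its frozen state

Similarly, a kexec-style reboot (also a Technology Preview) can be sketched as loading the currently installed kernel and switching to it without firmware initialization:

sudo kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline
sudo systemctl kexec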
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/technology-preview-kernel
Chapter 6. Installing a three-node cluster on vSphere
Chapter 6. Installing a three-node cluster on vSphere In OpenShift Container Platform version 4.17, you can install a three-node cluster on VMware vSphere. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 6.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: Configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. In a three-node cluster, the Ingress Controller pods run on the control plane nodes. For more information, see the "Load balancing requirements for user-provisioned infrastructure". After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on vSphere with user-provisioned infrastructure". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 6.2. Next steps Installing a cluster on vSphere with customizations Installing a cluster on vSphere with user-provisioned infrastructure
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
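After installation, you can confirm that the control plane machines are schedulable and carry both roles. The following oc commands are a minimal verification sketch; cluster is the default name of the Scheduler object, and the expectation of three dual-role nodes is an assumption about a healthy three-node cluster:

oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'   # expected output: true
oc get nodes                                                              # expect three nodes listing both control-plane/master and worker roles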
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_vmware_vsphere/installing-vsphere-three-node
Chapter 11. Enabling encryption on a vSphere cluster
Chapter 11. Enabling encryption on a vSphere cluster You can encrypt your virtual machines after installing OpenShift Container Platform 4.18 on vSphere by draining and shutting down your nodes one at a time. While each virtual machine is shut down, you can enable encryption in the vCenter web interface. 11.1. Encrypting virtual machines You can encrypt your virtual machines by using the following process: drain your virtual machines, power them down, and encrypt them using the vCenter interface. Finally, you can create a storage class to use the encrypted storage. Prerequisites You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account that has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . Procedure Drain and cordon one of your nodes. For detailed instructions on node management, see "Working with Nodes". Shut down the virtual machine associated with that node in the vCenter interface. Right-click on the virtual machine in the vCenter interface and select VM Policies Edit VM Storage Policies . Select an encrypted storage policy and select OK . Start the encrypted virtual machine in the vCenter interface. Repeat steps 1-5 for all nodes that you want to encrypt. Configure a storage class that uses the encrypted storage policy. For more information about configuring an encrypted storage class, see "VMware vSphere CSI Driver Operator". 11.2. Additional resources Working with nodes vSphere encryption Requirements for encrypting virtual machines
null
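The drain step and the final storage class can also be expressed on the command line. This is a sketch under assumptions: <node_name>, the storage class name, and the openshift-encrypted policy name are placeholders, and storagepolicyname is the vSphere CSI parameter used to select a vCenter storage policy:

oc adm cordon <node_name>
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data
# shut down, encrypt, and restart the VM in vCenter, then:
oc adm uncordon <node_name>

# Example storage class that references the encrypted storage policy:
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-csi-encrypted
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "openshift-encrypted"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF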
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_vmware_vsphere/vsphere-post-installation-encryption
Appendix B. Preparing a Local Manually Configured PostgreSQL Database
Appendix B. Preparing a Local Manually Configured PostgreSQL Database Use this procedure to set up the Manager database or Data Warehouse database with custom values. Set up this database before you configure the Manager; you must supply the database credentials during engine-setup . Note The engine-setup and engine-backup --mode=restore commands only support system error messages in the en_US.UTF8 locale, even if the system locale is different. The locale settings in the postgresql.conf file must be set to en_US.UTF8 . Important The database name must contain only numbers, underscores, and lowercase letters. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Initializing the PostgreSQL Database Install the PostgreSQL server package: Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot: Connect to the psql command line interface as the postgres user: Create a default user. The Manager's default user is engine and Data Warehouse's default user is ovirt_engine_history : Create a database. The Manager's default database name is engine and Data Warehouse's default database name is ovirt_engine_history : Connect to the new database: Add the uuid-ossp extension: Add the plpgsql language if it does not exist: Quit the psql interface: Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf file, and add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager or the Data Warehouse machine, and 0-32 or 0-128 with the CIDR mask length: Update the PostgreSQL server's configuration. Edit the /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf file and add the following lines to the bottom of the file: Restart the postgresql service: Optionally, set up SSL to secure database connections using the instructions at http://www.postgresql.org/docs/10/static/ssl-tcp.html#SSL-FILE-USAGE . Return to Section 3.3, "Installing and Configuring the Red Hat Virtualization Manager" , and answer Local and Manual when asked about the database.
[ "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms", "yum install rh-postgresql10 rh-postgresql10-postgresql-contrib", "scl enable rh-postgresql10 -- postgresql-setup --initdb systemctl enable rh-postgresql10-postgresql systemctl start rh-postgresql10-postgresql", "su - postgres -c 'scl enable rh-postgresql10 -- psql'", "postgres=# create role user_name with login encrypted password ' password ';", "postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';", "postgres=# \\c database_name", "database_name =# CREATE EXTENSION \"uuid-ossp\";", "database_name =# CREATE LANGUAGE plpgsql;", "database_name =# \\q", "host database_name user_name X.X.X.X/0-32 md5 host database_name user_name X.X.X.X::/0-128 md5", "autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192", "systemctl restart rh-postgresql10-postgresql" ]
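To confirm that the md5 client authentication and remote access settings took effect, you can connect from the Manager or Data Warehouse machine with psql. This is a quick sanity check rather than part of the documented procedure; the host name below is a placeholder, the user and database assume the default engine names, and it also assumes the PostgreSQL server is listening on the address you connect to:

psql -h database.example.com -p 5432 -U engine -d engine -c 'SELECT version();'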
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Preparing_a_Local_Manually-Configured_PostgreSQL_Database_SM_localDB_deploy
Chapter 5. Running .NET 8.0 applications in containers
Chapter 5. Running .NET 8.0 applications in containers Use the ubi8/dotnet-80-runtime image to run a .NET application inside a Linux container. The following example uses podman. Procedure Create a new MVC project in a directory called mvc_runtime_example : Publish the project: Run your image: Verification steps View the application running in the container:
[ "dotnet new mvc --output mvc_runtime_example", "dotnet publish mvc_runtime_example -f net8.0 /p:PublishProfile=DefaultContainer /p:ContainerBaseImage=registry.access.redhat.com/ubi8/dotnet-80-runtime:latest", "podman run --rm -p8080:8080 mvc_runtime_example", "xdg-open http://127.0.0.1:8080" ]
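If you prefer to verify from the command line instead of a browser, the following is a small sketch that assumes the image name and port used in the podman run command above:

podman ps --filter ancestor=mvc_runtime_example     # confirm the container is running
curl -s http://127.0.0.1:8080 | head -n 5           # fetch the start of the page served by the app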
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_9/running-apps-in-containers-using-dotnet_getting-started-with-dotnet-on-rhel-9
Chapter 2. Managing collections in automation hub
Chapter 2. Managing collections in automation hub As a content creator, you can use namespaces in automation hub to curate and manage collections for the following purposes: Create groups with permissions to curate namespaces and upload collections to private automation hub Add information and resources to the namespace to help end users of the collection in their automation tasks Upload collections to the namespace Review the namespace import logs to determine the success or failure of uploading the collection and its current approval status For information on creating content, see the Red Hat Ansible Automation Platform Creator Guide . 2.1. Using namespaces to manage collections in automation hub Namespaces are unique locations in automation hub to which you can upload and publish content collections. Access to namespaces in automation hub is governed by groups with permission to manage the content and related information that appears there. You can use namespaces in automation hub to organize collections developed within your organization for internal distribution and use. If you are working with namespaces, you must have a group that has permissions to create, edit and upload collections to namespaces. Collections uploaded to a namespace require administrative approval before you can publish them and make them available for use. 2.1.1. Creating a new group for content curators You can create a new group in private automation hub designed to support content curation in your organization. This group can contribute internally developed collections for publication in private automation hub. To help content developers create a namespace and upload their internally developed collections to private automation hub, you must first create and edit a group and assign the required permissions. Prerequisites You have administrative permissions in private automation hub and can create groups. Procedure Log in to your private automation hub. From the navigation panel, select User Access Groups and click Create . Enter Content Engineering as a Name for the group in the modal and click Create . You have created the new group and the Groups page opens. On the Permissions tab, click Edit . Under Namespaces , add permissions for Add Namespace , Upload to Namespace , and Change Namespace . Click Save . The new group is created with the permissions that you assigned. You can then add users to the group. Click the Users tab on the Groups page. Click Add . Select users and click Add . 2.1.2. Creating a namespace You can create a namespace to organize collections that your content developers upload to automation hub. When creating a namespace, you can assign a group in automation hub as owners of that namespace. Prerequisites You have Add Namespaces and Upload to Namespaces permissions. Procedure Log in to your private automation hub. From the navigation panel, select Collections Namespaces . Click Create and enter a namespace name . Assign a group of Namespace owners . Click Create . Your content developers can now upload collections to your new namespace and allow users in groups assigned as owners to upload collections. 2.1.3. Adding additional information and resources to a namespace You can add information and provide resources for your users to accompany collections included in the namespace. Add a logo and a description, and link users to your GitHub repository, issue tracker, or other online assets. You can also enter markdown text in the Edit resources tab to include more information. 
This is helpful to users who use your collection in their automation tasks. Prerequisites You have Change Namespaces permissions. Procedure Log in to your private automation hub. From the navigation panel, select Collections Namespaces . Click the More Actions icon ... and select Edit namespace . In the Edit details tab, enter information in the fields. Click the Edit resources tab to enter markdown in the text field. Click Save . Your content developers can now upload collections to your new namespace, or allow users in groups assigned as owners to upload collections. When you create a namespace, groups with permissions to upload to it can start adding their collections for approval. Collections in the namespace appear in the Published repository after approval. 2.1.4. Uploading collections to your namespaces You can upload internally developed collections in tar.gz file format to your private automation hub namespace for review and approval by an automation hub administrator. When approved, the collection moves to the Published content repository where automation hub users can view and download it. Note Format your collection file name as follows: <my_namespace-my_collection-1.0.0.tar.gz> Prerequisites You have a namespace to which you can upload the collection. Procedure Log in to your private automation hub. From the navigation panel, select Collections Namespaces and select a namespace. Click Upload collection . Click Select file from the New collection dialog. Select the collection to upload. Click Upload . The My Imports screen shows a summary of tests and notifies you if the collection uploaded successfully or if it failed. 2.1.5. Reviewing your namespace import logs You can review the status of collections uploaded to your namespaces to evaluate success or failure of the process. Imported collections information includes: Status completed or failed Approval status waiting for approval or approved Version the version of the uploaded collection Import log activities executed during the collection import Prerequisites You have access to a namespace to which you can upload collections. Procedure Log in to your private automation hub. From the navigation panel, select Collections Namespaces . Select a namespace. Click the More Actions icon ... and select My imports . Use the search field or locate an imported collection from the list. Click the imported collection. Review collection import details to determine the status of the collection in your namespace. 2.1.6. Deleting a namespace You can delete unwanted namespaces to manage storage on your automation hub server. You must first ensure that the namespace does not contain a collection with dependencies. Prerequisites The namespace you are deleting does not have a collection with dependencies. You have Delete namespace permissions. Procedure Log in to your private automation hub. From the navigation panel, select Collections Namespaces . Click the namespace to be deleted. Click the More Actions icon ... , then click Delete namespace . Note If the Delete namespace button is disabled, the namespace contains a collection with dependencies. Review the collections in this namespace, and delete any dependencies. See Deleting a collection on automation hub for information. The namespace that you deleted, as well as its associated collections, is now deleted and removed from the namespace list view. 2.2. 
Managing the publication process of internal collections in Automation Hub Use automation hub to manage and publish content collections developed within your organization. You can upload and group collections in namespaces. They need administrative approval to appear in the Published content repository. After you publish a collection, your users can access and download it for use. You can reject submitted collections that do not meet organizational certification criteria. 2.2.1. About Approval You can manage uploaded collections in automation hub by using the Approval feature located in the navigation panel. Approval Dashboard By default, the Approval dashboard lists all collections with Needs Review status. You can check these for inclusion in your Published repository. Viewing collection details You can view more information about the collection by clicking the Version number. Filtering collections Filter collections by Namespace , Collection Name or Repository , to locate content and update its status. 2.2.2. Approving collections for internal publication You can approve collections uploaded to individual namespaces for internal publication and use. All collections awaiting review are located under the Approval tab in the Staging repository. Prerequisites You have Modify Ansible repo content permissions. Procedure From the navigation panel, select Collections Approval . Collections requiring approval have the status Needs review . Select a collection to review. Click the Version to view the contents of the collection. Click Certify to approve the collection. Approved collections are moved to the Published repository where users can view and download them for use. 2.2.3. Rejecting collections uploaded for review You can reject collections uploaded to individual namespaces. All collections awaiting review are located under the Approval tab in the Staging repository. Collections requiring approval have the status Needs review . Click the Version to view the contents of the collection. Prerequisites You have Modify Ansible repo content permissions. Procedure From the navigation panel, select Collections Approval . Locate the collection to review. Click Reject to decline the collection. Collections you decline for publication are moved to the Rejected repository. 2.3. Repository management with automation hub As an automation hub administrator, you can create, edit, delete, and move automation content collections between repositories. 2.3.1. Types of repositories in automation hub In automation hub you can publish collections to two types of repositories, depending on whether you want your collection to be verified: Staging repositories Any user with permission to upload to a namespace can publish collections into these repositories. Collections in these repositories are not available in the search page. Instead, they are displayed on the approval dashboard for an administrator to verify. Staging repositories are marked with the pipeline=staging label. Custom repositories Any user with write permissions on the repository can publish collections to these repositories. Custom repositories can be public where all users can see them, or private where only users with view permissions can see them. These repositories are not displayed on the approval dashboard. If the repository owner enables search, the collection can appear in search results. By default, automation hub ships with one staging repository that is automatically used when a repository is not specified for uploading collections. 
Users can create new staging repositories during repository creation . 2.3.2. Approval pipeline for custom repositories in automation hub In automation hub you can approve collections into any repository marked with the pipeline=approved label. By default, automation hub includes one repository for approved content, but you have the option to add more from the repository creation screen. You cannot directly publish into a repository marked with the pipeline=approved label. A collection must first go through a staging repository and be approved before being published into a 'pipeline=approved' repository. Auto approval When auto approve is enabled, any collection you upload to a staging repository is automatically promoted to all of the repositories marked as pipeline=approved . Approval required When auto approve is disabled, the administrator can view the approval dashboard and see collections that have been uploaded into any of the staging repositories. Clicking Approve displays a list of approved repositories. From this list, the administrator can select one or more repositories to which the content should be promoted. If only one approved repository exists, the collection is automatically promoted into it and the administrator is not prompted to select a repository. Rejection Rejected collections are automatically placed into the rejected repository, which is pre-installed. 2.3.3. Role based access control to restrict access to custom repositories Use Role Based Access Control (RBAC) to restrict user access to custom repositories by defining access permissions based on user roles. By default, users can view all public repositories in their automation hub, but they cannot modify a repository unless their role allows them access to do so. The same logic applies to other operations on the repository. For example, you can remove a user's ability to download content from a custom repository by changing their role permissions. See Configuring user access for your private automation hub for information about managing user access in automation hub. 2.3.4. Creating a custom repository in automation hub When you use Red Hat Ansible Automation Platform to create a repository, you can configure the repository to be private or hide it from search results. Procedure Log in to automation hub. From the navigation panel, select Collection Repositories . Click Add repository . Enter a Repository name . In the Description field, describe the purpose of the repository. To retain versions of your repository each time you make a change, select Retained number of versions . The number of retained versions can range anywhere between 0 and unlimited. To save all versions, leave this set to null. Note If you have a problem with a change to your custom repository, you can revert to a different repository version that you have retained. In the Pipeline field, select a pipeline for the repository. This option defines who can publish a collection into the repository. Staging Anyone is allowed to publish automation content into the repository. Approved Collections added to this repository are required to go through the approval process by way of the staging repository. When auto approve is enabled, any collection uploaded to a staging repository is automatically promoted to all of the approved repositories. None Any user with permissions on the repository can publish to the repository directly, and the repository is not part of the approval pipeline.
Optional: To hide the repository from search results, select Hide from search . This option is selected by default. Optional: To make the repository private, select Make private . This hides the repository from anyone who does not have permissions to view the repository. To sync the content from a remote repository into this repository, select Remote and select the remote that contains the collections you want included in your custom repository. For more information, see Repository sync . Click Save . steps After the repository is created, the details page is displayed. From here, you can provide access to your repository, review or add collections, and work with the saved versions of your custom repository. 2.3.5. Providing access to a custom automation hub repository By default, private repositories and the automation content collections are hidden from all users in the system. Public repositories can be viewed by all users, but cannot be modified. Use this procedure to provide access to your custom repository. Procedure Log in to private automation hub. From the navigation panel, select Collection Repositories . Locate your repository in the list and click the More Actions icon ... , then select Edit . Select the Access tab. Select a group for Repository owners . See Configuring user access for your private automation hub for information about implementing user access. Select the roles you want assigned to the selected group. Click Save . 2.3.6. Adding collections to an automation hub repository After you create your repository, you can begin adding automation content collections to it. Procedure From the navigation panel, select Collection Repositories . Locate your repository in the list and click the More Actions icon ... , then select Edit . Select the Collections version tab. Click Add Collection and select the collections that you want to add to your repository. Click Select . 2.3.7. Revert to a different automation hub repository version When automation content collections are added or removed from a repository, a new version is created. If a change to your repository causes a problem, you can revert to a version. Reverting is a safe operation and does not delete collections from the system, but rather, changes the content associated with the repository. The number of versions saved is defined in the Retained number of versions setting when a repository is created . Procedure Log in to private automation hub. From the navigation panel, select Collection Repositories . Locate your repository in the list and click the More Actions icon ... , then select Edit . Locate the version you want to revert to and click the More Actions icon ... , and select Revert to this version . Click Revert . 2.3.8. Managing remote configurations in automation hub You can set up remote configurations to any server that is running automation hub. Remote configurations allow you to sync content to your custom repositories from an external collection source. 2.3.8.1. Creating a remote configuration in automation hub You can use Red Hat Ansible Automation Platform to create a remote configuration to an external collection source. Then, you can sync the content from those collections to your custom repositories. Procedure Log in to automation hub. From the navigation panel, select Collections Remotes . Click Add Remote . Enter a Name for the remote configuration. Enter the URL for the remote server, including the path for the specific repository. 
Note To find the remote server URL and repository path, navigate to Collection Repositories , select your repository, and click Copy CLI configuration . Configure the credentials to the remote server by entering a Token or Username and Password required to access the external collection. Note To generate a token from the navigation panel, select Collections API token , click Load token and copy the token that is loaded. To access collections from console.redhat.com, enter the SSO URL to sign in to the identity provider (IdP). Select or create a YAML requirements file to identify the collections and version ranges to synchronize with your custom repository. For example, to download only the kubernetes and AWS collection versions 5.0.0 or later, the requirements file would look like this: collections: - name: community.kubernetes - name: community.aws version: ">=5.0.0" Note All collection dependencies are downloaded during the Sync process. Optional: To configure your remote further, use the options available under Advanced configuration : If there is a corporate proxy in place for your organization, enter a Proxy URL , Proxy Username and Proxy Password . Enable or disable transport layer security using the TLS validation checkbox. If digital certificates are required for authentication, enter a Client key and Client certificate . If you are using a self-signed SSL certificate for your server, enter the PEM encoded client certificate used for authentication in the CA certificate field. To accelerate the speed at which collections in this remote can be downloaded, specify the number of collections that can be downloaded in tandem in the Download concurrency field. To limit the number of queries per second on this remote, specify a Rate Limit . Note Some servers can have a specific rate limit set, and if exceeded, synchronization fails. 2.3.8.2. Providing access to a remote configuration After you create a remote configuration, you must provide access to it before anyone can use it. Procedure Log in to private automation hub. From the navigation panel, select Collections Remotes . Locate your repository in the list, click the More Actions icon ... , and select Edit . Select the Access tab. Select a group for Repository owners . See Configuring user access for your private automation hub for information about implementing user access. Select the appropriate roles for the selected group. Click Save . 2.3.9. Synchronizing repositories in automation hub You can distribute relevant automation content collections to your users by synchronizing repositories from one automation hub to another. To ensure you have the latest collection updates, synchronize your custom repository with the remote regularly. Procedure Log in to automation hub. From the navigation panel, select Collection Repositories . Locate your repository in the list and click Sync . All collections in the configured remote are downloaded to your custom repository. To check the status of the collection sync, select Task Management from the navigation panel. Note To limit repository synchronization to specific collections within a remote, you can identify specific collections to be pulled by using a requirements.yml file. See Create a remote for more information. Additional resources For more information about using requirements files, see Install multiple collections with a requirements file in the Using Ansible collections guide. 2.3.10.
Exporting and importing collections in automation hub Ansible automation hub stores automation content collections within repositories. These collections are versioned by the automation content creator. Many versions of the same collection can exist in the same or different repositories at the same time. Collections are stored as .tar files that can be imported and exported. This storage format ensures that the collection you are importing to a new repository is the same one that was originally created and exported. 2.3.10.1. Exporting an automation content collection in automation hub After collections are finalized, you can import them to a location where they can be distributed to others across your organization. Procedure Log in to private automation hub. From the navigation panel, select Collections Collections . The Collections page displays all collections across all repositories. You can search for a specific collection. Select the collection that you want to export. The collection details page opens. From the Install tab, select Download tarball . The .tar file is downloaded to your default browser downloads folder. You can now import it to the location of your choosing. 2.3.10.2. Importing an automation content collection in automation hub As an automation content creator, you can import a collection to use in a custom repository. To use a collection in your custom repository, you must first import the collection into your namespace so the automation hub administrator can approve it. Procedure Log in to automation hub. From the navigation panel, select Collections Namespaces . The Namespaces page displays all of the namespaces available. Click View Collections . Click Upload Collection . Navigate to the collection tarball file, select the file and click Open . Click Upload . The My Imports screen displays a summary of tests and notifies you if the collection upload is successful or has failed. Note If the collection is not approved, it is not displayed in the published repository. Additional resources See Approval pipeline for more information about collection and repository approvals.
[ "collections: - name: community.kubernetes - name: community.aws version: \">=5.0.0\"" ]
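As an alternative to uploading the tarball through the web UI described above, a collection can be built and published from the command line. This is a sketch under assumptions: it is run from the collection source directory (the one containing galaxy.yml), and private_hub is a placeholder for a Galaxy server definition in ansible.cfg that points at your private automation hub with an API token:

ansible-galaxy collection build                                                   # produces my_namespace-my_collection-1.0.0.tar.gz
ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz -s private_hub

The uploaded collection still lands in the staging repository and follows the same approval flow described in this chapter.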
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/managing_content_in_automation_hub/managing-collections-hub
Chapter 4. Workstation Security
Chapter 4. Workstation Security Securing a Linux environment begins with the workstation. Whether locking down a personal machine or securing an enterprise system, sound security policy begins with the individual computer. After all, a computer network is only as secure as its weakest node. 4.1. Evaluating Workstation Security When evaluating the security of a Red Hat Enterprise Linux workstation, consider the following: BIOS and Boot Loader Security - Can an unauthorized user physically access the machine and boot into single user or rescue mode without a password? Password Security - How secure are the user account passwords on the machine? Administrative Controls - Who has an account on the system and how much administrative control do they have? Available Network Services - What services are listening for requests from the network and should they be running at all? Personal Firewalls - What type of firewall, if any, is necessary? Security Enhanced Communication Tools - Which tools should be used to communicate between workstations and which should be avoided?
null
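For the "Available Network Services" question above, a quick inventory of listening services is usually the first step. The commands below are a minimal sketch using tools shipped with Red Hat Enterprise Linux 4; the output naturally depends on the workstation being evaluated:

netstat -tulpn                    # which services are listening, and on which ports
chkconfig --list | grep ':on'     # which services are enabled to start at boot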
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-wstation
10.4. Performance Tools
10.4. Performance Tools Red Hat Enterprise Linux 7 includes updates to the most recent versions of several performance tools, such as oprofile , papi , and elfutils , bringing performance, portability, and functionality improvements. Moreover, Red Hat Enterprise Linux 7 premieres: Support for Performance Co-Pilot; SystemTap support for (DynInst-based) instrumentation that runs entirely in unprivileged user space, as well as efficient (Byteman-based) pinpoint probing of Java applications; Valgrind support for hardware transactional memory and improvements in modeling vector instructions. 10.4.1. Performance Co-Pilot Red Hat Enterprise Linux 7 introduces support for Performance Co-Pilot (PCP), a suite of tools, services, and libraries for acquisition, archiving, and analysis of system-level performance measurements. Its light-weight, distributed architecture makes it particularly well suited to centralized analysis of complex systems. Performance metrics can be added using the Python, Perl, C++ and C interfaces. Analysis tools can use the client APIs (Python, C++, C) directly, and rich web applications can explore all available performance data using a JSON interface. For further information, see the Index of Performance Co-Pilot (PCP) articles, solutions, tutorials and white papers on the Customer Portal, or consult the extensive manual pages in the pcp and pcp-libs-devel packages. The pcp-doc package installs documentation in the /usr/share/doc/pcp-doc/* directory, which also includes the Performance Co-Pilot User's and Administrator's Guide as well as the Performance Co-Pilot Programmer's Guide. 10.4.2. SystemTap Red Hat Enterprise Linux 7 includes systemtap version 2.4, which brings several new capabilities. These include optional pure user-space script execution, richer and more efficient Java probing, virtual machine probing, improved error messages, and a number of bug fixes and new features. In particular, the following: Using the dyninst binary-editing library, SystemTap can now execute some scripts purely at user-space level; no kernel or root privileges are used. This mode, selected using the stap --dyninst option, enables only those types of probes or operations that affect only the user's own processes. Note that this mode is incompatible with programs that throw C++ exceptions; A new way of injecting probes into Java applications is supported in conjunction with the byteman tool. New SystemTap probe types, java("com.app").class(" class_name ").method(" name ( signature )").* , enable probing of individual method enter and exit events in an application, without system-wide tracing; A new facility has been added to the SystemTap driver tooling to enable remote execution on a libvirt-managed KVM instance running on a server. It enables automated and secure transfer of a compiled SystemTap script to a virtual machine guest across a dedicated secure virtio-serial link. A new guest-side daemon loads the scripts and transfers their output back to the host. This way is faster and does not require an IP-level networking connection between the host and the guest. To test this function, run the following command: In addition, a number of improvements have been made to SystemTap's diagnostic messages: Many error messages now contain cross-references to the related manual pages. These pages explain the errors and suggest corrections; If a script input is suspected to contain typographic errors, a sorted suggestion list is offered to the user.
This suggestion facility is used in a number of contexts when user-specified names may mismatch acceptable names, such as probed function names, markers, variables, files, aliases, and others; Diagnostic duplicate-elimination has been improved; ANSI coloring has been added to make messages easier to understand. 10.4.3. Valgrind Red Hat Enterprise Linux 7 includes Valgrind , an instrumentation framework that includes a number of tools to profile applications. This version is based on the Valgrind 3.9.0 release and includes numerous improvements relative to the Red Hat Enterprise Linux 6 and Red Hat Developer Toolset 2.1 counterparts, which were based on Valgrind 3.8.1. Notable new features of Valgrind included in Red Hat Enterprise Linux 7 are the following: Support for IBM System z Decimal Floating Point instructions on hosts that have the DFP facility installed; Support for IBM POWER8 (Power ISA 2.07) instructions; Support for Intel AVX2 instructions. Note that this is available only on 64-bit architectures; Initial support for Intel Transactional Synchronization Extensions, both Restricted Transactional Memory (RTM) and Hardware Lock Elision (HLE); Initial support for Hardware Transactional Memory on IBM PowerPC; The default size of the translation cache has been increased to 16 sectors, reflecting the fact that large applications require instrumentation and storage of huge amounts of code. For similar reasons, the number of memory mapped segments that can be tracked has been increased by a factor of 6. The maximum number of sectors in the translation cache can be controlled by the new flag --num-transtab-sectors ; Valgrind no longer temporarily creates a mapping of the entire object to read from it. Instead, reading is done through a small fixed-size buffer. This avoids virtual memory spikes when Valgrind reads debugging information from large shared objects; The list of used suppressions (displayed when the -v option is specified) now shows, for each used suppression, the file name and line number where the suppression is defined; A new flag, --sigill-diagnostics , can now be used to control whether a diagnostic message is printed when the just-in-time (JIT) compiler encounters an instruction it cannot translate. The actual behavior - delivery of the SIGILL signal to the application - is unchanged. The Memcheck tool has been improved with the following features: Improvements in handling of vector code, leading to significantly fewer false error reports. Use the --partial-loads-ok=yes flag to get the benefits of these changes; Better control over the leak checker. It is now possible to specify which kind of leaks (definite, indirect, possible, and reachable) should be displayed, which should be regarded as errors, and which should be suppressed by a given leak suppression. This is done using the options --show-leak-kinds=kind1,kind2,.. , --errors-for-leak-kinds=kind1,kind2,.. and an optional match-leak-kinds: line in suppression entries, respectively; Note that generated leak suppressions contain this new line and are therefore more specific than in previous releases. To get the same behavior as in previous releases, remove the match-leak-kinds: line from generated suppressions before using them; Reduced possible leak reports from the leak checker by the use of better heuristics.
The available heuristics provide detection of valid interior pointers to std::string, to new[] allocated arrays with elements having destructors, and to interior pointers pointing to an inner part of a C++ object using multiple inheritance. They can be selected individually using the --leak-check-heuristics=heur1,heur2,... option; Better control of stacktrace acquisition for heap-allocated blocks. Using the --keep-stacktraces option, it is possible to control independently whether a stack trace is acquired for each allocation and deallocation. This can be used to create better "use after free" errors or to decrease Valgrind's resource consumption by recording less information; Better reporting of leak suppression usage. The list of suppressions used (shown when the -v option is specified) now shows, for each leak suppression, how many blocks and bytes it suppressed during the last leak search. The Valgrind GDB server integration has been improved with the following monitoring commands: A new monitor command, v.info open_fds , that gives the list of open file descriptors and additional details; A new monitor command, v.info execontext , that shows information about the stack traces recorded by Valgrind; A new monitor command, v.do expensive_sanity_check_general , to run certain internal consistency checks.
[ "stap --remote=libvirt:// MyVirtualMachine" ]
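The Valgrind and SystemTap options named above can be combined as in the following sketch. This is illustrative only and not taken from the release notes; ./my_app is a placeholder binary and the probe script is a trivial example of the unprivileged dyninst mode:

# Memcheck with the new leak-kind, stack-trace, and vector-code options
valgrind --tool=memcheck --leak-check=full \
         --show-leak-kinds=definite,indirect \
         --errors-for-leak-kinds=definite \
         --partial-loads-ok=yes \
         --keep-stacktraces=alloc-and-free ./my_app

# SystemTap pure user-space (dyninst) mode; no kernel or root privileges required
stap --dyninst -c ./my_app -e 'probe process.function("main") { printf("entered main\n") }'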
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-compiler_and_tools-performance_tools
4.29. cluster and gfs2-utils
4.29. cluster and gfs2-utils 4.29.1. RHBA-2012:1190 - cluster and gfs2-utils bug fix update Updated cluster and gfs2-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Using redundant hardware, shared disk storage, power management, and robust cluster communication and application failover mechanisms, a cluster can meet the needs of the enterprise market. Bug Fix BZ# 849048 Previously, it was not possible to specify start-up options to the dlm_controld daemon. As a consequence, certain features were not working as expected. With this update, it is possible to use the /etc/sysconfig/cman configuration file to specify dlm_controld start-up options, thus fixing this bug. All users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix this bug. 4.29.2. RHBA-2011:1516 - cluster and gfs2-utils bug fix and enhancement update Updated cluster and gfs2-utils packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The cluster packages contain the core clustering libraries for Red Hat High Availability as well as utilities to maintain GFS2 file systems for users of Red Hat Resilient Storage. Bug Fixes BZ# 707115 The cluster and gfs2-utils packages have been upgraded to upstream version 3.0.12.1, which provides a number of bug fixes over the previous version. BZ# 713977 Previously, when a custom multicast address was configured, the configuration parser incorrectly set the default value of the time-to-live (TTL) variable for multicast packets to 0. As a consequence, cluster nodes were not able to communicate with each other. With this update, the default TTL value is set to 1, which fixes the problem. BZ# 726777 A section describing the "suborg" option for the fence_cisco_ucs agent was not present in the RELAX NG schema which is used to validate the cluster.conf file. As a consequence, validation of cluster.conf failed even if the file was valid. The suborg section has been added to the RELAX NG schema and cluster.conf is now validated correctly. BZ# 707091 Building the resource group index for a new GFS2 file system using the mkfs.gfs2 utility used all the space allocated. If the file system filled up completely, no room was left to write a new rindex entry. As a consequence, the gfs2_grow utility was unable to expand the file system. The mkfs.gfs2 utility has been modified so that enough space is now allocated for the entire rindex file, and one extra rindex entry. The gfs2_grow source code has been modified to utilize the unused rindex space. As a result, gfs2_grow is now able to expand a completely full GFS2 file system. BZ# 678585 GFS2 POSIX (Portable Operating System Interface) lock operations (implemented in Distributed Lock Manager, also known as DLM) are not interruptible when they wait for another POSIX lock. Previously, processes that created a deadlock with POSIX locks could not be killed to resolve the problem, and one node had to be reset. DLM now uses a new kernel feature that allows the waiting process to be killed, and information about the killed process is now passed to the dlm_controld daemon to be cleaned up. Processes deadlocked on GFS2 POSIX locks can now be recovered by killing one or more of them.
BZ# 719135 Prior to this update, boundaries for the locktable and label fields in the GFS2 superblock were not properly checked by the tunegfs2 tool. As a consequence, running the "gfs2_tool sb" command could terminate unexpectedly with a buffer overflow. In addition, invalid characters could be printed when using tunegfs2 to change locktable or label to a minimum or maximum length (63 characters). The tunegfs2 tool has been modified to check the correct boundaries of the locktable and label fields. As a result, tunegfs2 no longer creates invalid locktables or labels, and therefore gfs2_tool prints the superblock values properly. BZ# 740385 When executing the cman utility by using the init script with enabled debugging, a file descriptor leaked. The file pointed to by the file descriptor would continue to grow endlessly, filling up the /tmp file system. This update ensures that the file descriptor is closed after a successful cman startup. Space in /tmp is now released correctly. BZ# 695795 The cman utility implements a complex set of checks to configure the Totem protocol. One of the checks that copies the configuration data was incorrect and the transport protocol option was not handled correctly as a consequence. A patch has been applied to address this issue and cman now handles the transport option properly. BZ# 679566 When the user executed the "gfs2_edit savemeta" command to save the metadata for a target GFS2 file system, not all of the directory information was saved for large directories. If the metadata was restored to another device, the fsck.gfs2 tool found directory corruption because of a missing leaf block. This was due to gfs2_edit treating the directory leaf index (also known as the directory hash table) like a normal data file. With this update, gfs2_edit's savemeta function is modified to actually read all the data (the directory hash table) for large directories and traverse the hash table, saving all the leaf blocks. Now, all leaf blocks are saved properly. BZ# 679080 When the fsck.gfs2 tool was resolving block references and no valid reference was found, the reference list became empty. As a consequence, fsck.gfs2 check in pass1b terminated unexpectedly with a segmentation fault. With this update, pass1b is modified to check whether the list is empty. The segmentation fault no longer occurs and fsck.gfs2 proceeds as expected. BZ# 731775 The dlm_controld daemon passed error results back to the kernel for POSIX unlock operations flagged with CLOSE. As a consequence, the kernel displayed the "dlm: dev_write no op" error messages, mostly when using non-POSIX locks (flocks). The dlm_controld daemon has been fixed to not pass error results to the kernel for POSIX unlock operations flagged with CLOSE. As a result, error messages no longer appear. BZ# 729071 Previously, the mount.gfs2 utility passed the "loop" option to the GFS2 kernel module which treated it as an invalid option. Mounting a GFS2 file system on loopback devices failed with an "Invalid argument" error message. With this update, mount.gfs2 is modified to avoid passing the "loop" option to the kernel. Mounting GFS2 systems on loopback devices now works as expected. BZ# 728230 Missing sanity checks related to the length of a cluster name caused the cman utility to fail to start. The correct sanity checks have been implemented with this update. The cman utility starts successfully and informs the user of the incorrect value of the cluster name, if necessary.
BZ# 726065 The XML format requires special handling of certain special characters. Handling of these characters was not implemented correctly, which caused the cluster.conf file to not function as expected. Correct handling of the characters has been implemented and cluster.conf now works as expected. BZ# 706141 The exact device/mount paths were not compared due to incorrect logic in mount.gfs2 when trying to find mtab entries for deletion. The original entry was not found during remounts and therefore was not deleted. This resulted in double mtab entries. With this update, the realpath() function is used on the device/mount paths so that they match the content of mtab. As a result, the correct original mtab entry is deleted during a remount, and a replacement entry with the new mount options is inserted in its place. BZ# 720668 Previously, mkfs.gfs2 treated normal files incorrectly as if they were block devices. Attempting to create a GFS2 file system on a normal file caused mkfs.gfs2 to fail with a "not a block device" error message. Additional checks have been added so that mkfs.gfs2 does not call functions specific to block devices on normal files. GFS2 file systems can now be created on normal files. However, use of GFS2 in such cases is not recommended. BZ# 719126 The tunegfs2 command line usage message was not updated to reflect the available arguments which are documented in the man page. As a consequence, tunegfs2 printed an inaccurate usage message. The usage message has been updated and tunegfs2 now prints an accurate message. BZ# 719124 Previously, certain argument validation functions did not return error values, and tunegfs2 therefore printed confusing error messages instead of exiting quietly. Error handling has been improved in these validation functions, and tunegfs2 now exits quietly instead of printing the confusing messages. BZ# 694823 Previously, the gfs2_tool command printed the UUID (Universally Unique Identifier) output in uppercase. Certain applications expecting the output to be in lowercase (such as mount) could have malfunctioned as a consequence. With this update, gfs2_tool is modified to print UUIDs in lowercase so that they are in a commonly accepted format. BZ# 735917 The qdisk daemon did not allow cman to update the quorum disk device name. The quorum disk device name was not updated when the device was changed and, in very rare cases, the number of qdiskd votes would therefore not be correct. A new quorum API call has been implemented to update the name and votes of a quorum device. As a result, quorum disk device names and votes are updated consistently and faster than before. BZ# 683104 Prior to this update, the fsck.gfs2 utility used the number of entries in the journal index to look for missing journals. As a consequence, if more than one journal was missing, not all journals were rebuilt and subsequent runs of fsck.gfs2 were needed to recover all the journals. Each node needs its own journal; fsck.gfs2 has therefore been modified to use the "per_node" system directory to determine the correct number of journals to repair. As a result, fsck.gfs2 now repairs all the journals in one run. BZ# 663397 Previously, token timeout intervals of corosync were larger than the time it took a failed node to rejoin the cluster. Consequently, corosync did not detect that a node had failed until it rejoined. The failed node had been added again before the dlm_controld daemon asked corosync for the new member list, but dlm_controld did not notice this change.
This eventually caused the DLM (Distributed Lock Manager) lockspace operations to get stuck. With this update, dlm_controld can notice that a node was removed and added between checks by looking for a changed incarnation number. Now, dlm_controld can properly handle nodes that are quickly removed and added again during large token timeouts. BZ# 732991 Previously, if a cluster was configured with a redundant corosync ring, the dlm_controld daemon would log harmless EEXIST errors ("mkdir failed: 17"). This update removes these error messages so that they no longer appear. Enhancements BZ# 733345 The corosync IPC port allows, when configured correctly, non-privileged users to access corosync services. Prior to this update, the cman utility did not handle such connections correctly. As a consequence, users were not able to configure unprivileged access to corosync when it was executed using cman. This update adds support to cman to configure unprivileged access. As a result, configured users and groups can now access corosync services without root privileges. BZ# 680930 This update introduces dynamic schema generation, which provides considerable flexibility for end users. Users can plug custom resource and fence agents into the Red Hat Enterprise Linux High Availability Add-On and still retain the ability to validate their cluster.conf file against those agents. BZ# 732635 , BZ# 735912 This update adds support for the Redundant Ring Protocol, which aligns the default configuration of cman with corosync. Note that this enhancement is included as a Technology Preview. BZ# 702313 Previously, gfs2_edit saved GFS2 metadata uncompressed. Saved GFS2 metadata sets could have filled up a lot of storage space, and transferring them (for example, for support and debugging) would be slow. This update adds gzip compression to the metadata saving and restoring functions of gfs2_edit. GFS2 metadata sets are now compressed when saving and decompressed when restoring them. The user can specify the compression level with a command line option (see the sketch after these notes). BZ# 704178 With this update, the tunegfs2 utility replaces the superblock manipulation feature of gfs2_tool. BZ# 673575 Previously, the fence_scsi agent did not reboot a node when it was fenced. As a consequence, the node had to be rebooted manually before rejoining the cluster. This update provides a script for detecting the loss of SCSI reservations. This script can be used in conjunction with the watchdog package in order to reboot a failed host. Users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 4.29.3. RHBA-2012:0575 - cluster and gfs2-utils bug fix update Updated cluster and gfs2-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Cluster Manager is a collection of technologies working together to provide data integrity and the ability to maintain application availability in the event of a failure. Using redundant hardware, shared disk storage, power management, and robust cluster communication and application failover mechanisms, a cluster can meet the needs of the enterprise market. Bug Fix BZ# 820357 Prior to this update, the cmannotifyd daemon did not correctly generate a cluster status notification message at first cluster startup. This update addresses the problem, and cmannotifyd now correctly triggers the notification hooks when the daemon is started. All users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix this bug.
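The compressed metadata workflow introduced by BZ#702313 looks roughly like the following. This is a minimal sketch rather than text from the errata: the device path and output file are placeholders, and the -z compression-level option shown here is an assumption about the gfs2_edit command line.

# Save (and gzip-compress) the metadata of a GFS2 file system for support or debugging.
# The compression level option (-z) is an assumption; device and file names are placeholders.
gfs2_edit savemeta -z 9 /dev/vg_cluster/lv_gfs2 /var/tmp/gfs2.meta.gz

# Restore the saved metadata onto a scratch device for analysis.
gfs2_edit restoremeta /var/tmp/gfs2.meta.gz /dev/vg_test/lv_scratch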
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cluster-and-gfs2-utils
Chapter 11. Red Hat Enterprise Linux Atomic Host
Chapter 11. Red Hat Enterprise Linux Atomic Host Included in the release of Red Hat Enterprise Linux 7.1 is Red Hat Enterprise Linux Atomic Host - a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. It has been designed to take advantage of the powerful technology available in Red Hat Enterprise Linux 7. Red Hat Enterprise Linux Atomic Host uses SELinux to provide strong safeguards in multi-tenant environments, and provides the ability to perform atomic upgrades and rollbacks, enabling quicker and easier maintenance with less downtime. Red Hat Enterprise Linux Atomic Host uses the same upstream projects delivered via the same RPM packaging as Red Hat Enterprise Linux 7. Red Hat Enterprise Linux Atomic Host is pre-installed with the following tools to support Linux containers: Docker - For more information, see Get Started with Docker Formatted Container Images on Red Hat Systems . Kubernetes , flannel , etcd - For more information, see Get Started Orchestrating Containers with Kubernetes . Red Hat Enterprise Linux Atomic Host makes use of the following technologies: OSTree and rpm-OSTree - These projects provide atomic upgrades and rollback capability. systemd - The powerful new init system for Linux that enables faster boot times and easier orchestration. SELinux - Enabled by default to provide complete multi-tenant security. New features in Red Hat Enterprise Linux Atomic Host 7.1.4 The iptables-service package has been added. It is now possible to enable automatic "command forwarding": commands that are not found on Red Hat Enterprise Linux Atomic Host are seamlessly retried inside the RHEL Atomic Tools container. The feature is disabled by default (it requires the RHEL Atomic Tools image to be pulled on the system). To enable it, uncomment the export line in the /etc/sysconfig/atomic file (the resulting line is reproduced in the sketch after this section). The atomic command: You can now pass three options ( OPT1 , OPT2 , OPT3 ) to the LABEL command in a Dockerfile. Developers can add environment variables to the labels to allow users to pass additional commands using atomic . An example LABEL line from a Dockerfile, together with the equivalent atomic and docker commands, is listed in the command block at the end of this page and in the sketch after this section. You can now use ${NAME} and ${IMAGE} anywhere in your label, and atomic will substitute them with an image and a name. The ${SUDO_UID} and ${SUDO_GID} options are set and can be used in an image LABEL . The atomic mount command attempts to mount the file system belonging to a given container/image ID or image to the given directory. Optionally, you can provide a registry and tag to use a specific version of an image. New features in Red Hat Enterprise Linux Atomic Host 7.1.3 Enhanced rpm-OSTree to provide a unique machine ID for each provisioned machine. Support for a remote-specific GPG keyring has been added, specifically to associate a particular GPG key with a particular OSTree remote. The atomic command: atomic upload - allows the user to upload a container image to a docker repository or to a Pulp/Crane instance. atomic version - displays the "Name Version Release" container label in the following format: ContainerID;Name-Version-Release;Image/Tag atomic verify - inspects an image to verify that the image layers are based on the latest image layers available. For example, if you have a MongoDB application based on rhel7-1.1.2 and a rhel7-1.1.3 base image is available, the command will inform you there is a later image. A dbus interface has been added to the verify and version commands. 
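The export line and the Dockerfile LABEL example referenced above are listed in the command block at the end of this page; the following sketch gathers them in one place, with the Dockerfile line shown as a comment and image_name used as a placeholder.

# Enable command forwarding into the RHEL Atomic Tools container by
# uncommenting the export line in /etc/sysconfig/atomic:
export TOOLSIMG=rhel7/rhel-tools

# A Dockerfile can expose extra options to atomic through a LABEL, for example:
#   LABEL docker run ${OPT1}${IMAGE}
# With such a label in place, the following two commands are equivalent:
atomic run --opt1="-ti" image_name
docker run -ti image_name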
New features in Red Hat Enterprise Linux Atomic Host 7.1.2 The atomic command-line interface is now available for Red Hat Enterprise Linux 7.1 as well as Red Hat Enterprise Linux Atomic Host. Note that the feature set differs between the two systems. Only Red Hat Enterprise Linux Atomic Host includes support for OSTree updates. The atomic run command is supported on both platforms. atomic run allows a container to specify its run-time options via the RUN meta-data label. This is used primarily for privileged containers. atomic install and atomic uninstall allow a container to specify install and uninstall scripts via the INSTALL and UNINSTALL meta-data labels (see the sketch after this section). atomic now supports container upgrade and checking for updated images. The iscsi-initiator-utils package has been added to Red Hat Enterprise Linux Atomic Host. This allows the system to mount iSCSI volumes; Kubernetes has gained a storage plugin to set up iSCSI mounts for containers. You will also find Integrity Measurement Architecture (IMA), audit and libwrap available from systemd . Important Red Hat Enterprise Linux Atomic Host is not managed in the same way as other Red Hat Enterprise Linux 7 variants. Specifically: The Yum package manager is not used to update the system and install or update software packages. For more information, see Installing Applications on Red Hat Enterprise Linux Atomic Host . There are only two directories on the system with write access for storing local system configuration: /etc/ and /var/ . The /usr/ directory is mounted read-only. Other directories are symbolic links to a writable location - for example, the /home/ directory is a symlink to /var/home/ . For more information, see Red Hat Enterprise Linux Atomic Host File System . The default partitioning dedicates most of the available space to containers, using direct Logical Volume Management (LVM) instead of the default loopback. For more information, see Getting Started with Red Hat Enterprise Linux Atomic Host . Red Hat Enterprise Linux Atomic Host 7.1.1 provides new versions of Docker and etcd , and maintenance fixes for the atomic command and other components.
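A hypothetical illustration of the RUN, INSTALL, and UNINSTALL meta-data labels described above; the label contents, script paths, and the example/app image name are assumptions for illustration, not taken from this release note.

# Hypothetical labels in the image's Dockerfile (shown here as comments):
#   LABEL RUN="docker run -d --name ${NAME} ${IMAGE}"
#   LABEL INSTALL="docker run --rm ${IMAGE} /usr/bin/install.sh"
#   LABEL UNINSTALL="docker run --rm ${IMAGE} /usr/bin/uninstall.sh"

# atomic reads these labels and executes the embedded commands:
atomic install example/app     # runs the INSTALL label
atomic run example/app         # runs the RUN label
atomic uninstall example/app   # runs the UNINSTALL label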
[ "export TOOLSIMG=rhel7/rhel-tools", "LABEL docker run USD{OPT1}USD{IMAGE}", "atomic run --opt1=\"-ti\" image_name", "docker run -ti image_name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-atomic_host
Chapter 18. Managing host groups using Ansible playbooks
Chapter 18. Managing host groups using Ansible playbooks To learn more about host groups in Identity Management (IdM) and using Ansible to perform operations involving host groups in Identity Management (IdM), see the following: Host groups in IdM Ensuring the presence of IdM host groups Ensuring the presence of hosts in IdM host groups Nesting IdM host groups Ensuring the presence of member managers in IdM host groups Ensuring the absence of hosts from IdM host groups Ensuring the absence of nested host groups from IdM host groups Ensuring the absence of member managers from IdM host groups 18.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 18.2. Ensuring the presence of IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are created in IdM using the ipa hostgroup-add command. The result of adding a host group to IdM is the state of the host group being present in IdM. Because of the Ansible reliance on idempotence, to add a host group to IdM using Ansible, you must create a playbook in which you define the state of the host group as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. For example, to ensure the presence of a host group named databases , specify name: databases in the - ipahostgroup task. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-present.yml file. In the playbook, state: present signifies a request to add the host group to IdM unless it already exists there. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose presence in IdM you wanted to ensure: The databases host group exists in IdM. 18.3. 
Ensuring the presence of hosts in IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of hosts in host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file have been added to IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host with the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This playbook adds the db.idm.example.com host to the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself. Instead, only an attempt is made to add db.idm.example.com to databases . Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about a host group to see which hosts are present in it: The db.idm.example.com host is present as a member of the databases host group. 18.4. Nesting IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of nested host groups in Identity Management (IdM) host groups using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. 
To ensure that a nested host group A exists in a host group B, specify, among the - ipahostgroup variables in the Ansible playbook, the name of the host group B using the name variable. Specify the name of the nested hostgroup A with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This Ansible playbook ensures the presence of the mysql-server and oracle-server host groups in the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself to IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group in which nested host groups are present: The mysql-server and oracle-server host groups exist in the databases host group. 18.5. Ensuring the presence of member managers in IdM host groups using Ansible playbooks The following procedure describes ensuring the presence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the host or host group you are adding as member managers and the name of the host group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify whether the group_name host group contains example_member and project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. 18.6. Ensuring the absence of hosts from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of hosts from host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . 
Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host and host group information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host whose absence from the host group you want to ensure using the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook ensures the absence of the db.idm.example.com host from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to remove the databases group itself. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group and the hosts it contains: The db.idm.example.com host does not exist in the databases host group. 18.7. Ensuring the absence of nested host groups from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of nested host groups from outer host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. Specify, among the - ipahostgroup variables, the name of the outer host group using the name variable. Specify the name of the nested hostgroup with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook makes sure that the mysql-server and oracle-server host groups are absent from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to ensure the databases group itself is deleted from IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group from which nested host groups should be absent: The output confirms that the mysql-server and oracle-server nested host groups are absent from the outer databases host group. 18.8. Ensuring the absence of IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are removed from IdM using the ipa hostgroup-del command. 
The result of removing a host group from IdM is the state of the host group being absent from IdM. Because of the Ansible reliance on idempotence, to remove a host group from IdM using Ansible, you must create a playbook in which you define the state of the host group as absent: state: absent . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password (see the sketch at the end of this chapter). The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-absent.yml file. This playbook ensures the absence of the databases host group from IdM. In the playbook, state: absent signifies a request to delete the host group from IdM unless it is already absent. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose absence you ensured: The databases host group does not exist in IdM. 18.9. Ensuring the absence of member managers from IdM host groups using Ansible playbooks The following procedure describes ensuring the absence of member managers from IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are removing as member managers and the name of the host group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify that the group_name host group no longer contains example_member or project_admins as member managers by using the ipa hostgroup-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system.
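The prerequisites throughout this chapter assume that a secret.yml Ansible vault in the ~/MyPlaybooks/ directory stores ipaadmin_password. A minimal sketch of creating such a vault and a matching vault password file follows; the file names and the example passwords are placeholders, not values from this documentation.

# Store the vault password used later with --vault-password-file (names are placeholders).
echo 'MyVaultPassword' > ~/MyPlaybooks/password_file
chmod 600 ~/MyPlaybooks/password_file

# Create the encrypted vault; in the editor that opens, add a line such as:
#   ipaadmin_password: Secret123
ansible-vault create --vault-password-file ~/MyPlaybooks/password_file ~/MyPlaybooks/secret.yml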
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i 
path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases ipa: ERROR: databases: host group not found", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/managing-host-groups-using-ansible-playbooks_using-ansible-to-install-and-manage-identity-management
Chapter 5. AuthProviderService
Chapter 5. AuthProviderService 5.1. ExchangeToken POST /v1/authProviders/exchangeToken 5.1.1. Description 5.1.2. Parameters 5.1.2.1. Body Parameter Name Description Required Default Pattern body V1ExchangeTokenRequest X 5.1.3. Return Type V1ExchangeTokenResponse 5.1.4. Content Type application/json 5.1.5. Responses Table 5.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ExchangeTokenResponse 0 An unexpected error response. RuntimeError 5.1.6. Samples 5.1.7. Common object reference 5.1.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 5.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 
value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.1.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.1.7.4. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 5.1.7.5. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. - \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. 
lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 5.1.7.6. StorageServiceIdentity Field Name Required Nullable Type Description Format serialStr String serial String int64 id String type StorageServiceType UNKNOWN_SERVICE, SENSOR_SERVICE, CENTRAL_SERVICE, CENTRAL_DB_SERVICE, REMOTE_SERVICE, COLLECTOR_SERVICE, MONITORING_UI_SERVICE, MONITORING_DB_SERVICE, MONITORING_CLIENT_SERVICE, BENCHMARK_SERVICE, SCANNER_SERVICE, SCANNER_DB_SERVICE, ADMISSION_CONTROL_SERVICE, SCANNER_V4_INDEXER_SERVICE, SCANNER_V4_MATCHER_SERVICE, SCANNER_V4_DB_SERVICE, initBundleId String 5.1.7.7. StorageServiceType Enum Values UNKNOWN_SERVICE SENSOR_SERVICE CENTRAL_SERVICE CENTRAL_DB_SERVICE REMOTE_SERVICE COLLECTOR_SERVICE MONITORING_UI_SERVICE MONITORING_DB_SERVICE MONITORING_CLIENT_SERVICE BENCHMARK_SERVICE SCANNER_SERVICE SCANNER_DB_SERVICE ADMISSION_CONTROL_SERVICE SCANNER_V4_INDEXER_SERVICE SCANNER_V4_MATCHER_SERVICE SCANNER_V4_DB_SERVICE 5.1.7.8. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 5.1.7.9. StorageUserInfo Field Name Required Nullable Type Description Format username String friendlyName String permissions UserInfoResourceToAccess roles List of StorageUserInfoRole 5.1.7.10. StorageUserInfoRole Role is wire compatible with the old format of storage.Role and hence only includes role name and associated permissions. Field Name Required Nullable Type Description Format name String resourceToAccess Map of StorageAccess 5.1.7.11. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 5.1.7.12. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. 
Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 5.1.7.13. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 5.1.7.14. UserInfoResourceToAccess ResourceToAccess represents a collection of permissions. It is wire compatible with the old format of storage.Role and replaces it in places where only aggregated permissions are required. Field Name Required Nullable Type Description Format resourceToAccess Map of StorageAccess 5.1.7.15. V1AuthStatus Field Name Required Nullable Type Description Format userId String serviceId StorageServiceIdentity expires Date date-time refreshUrl String authProvider StorageAuthProvider userInfo StorageUserInfo userAttributes List of V1UserAttribute idpToken String Token returned to ACS by the underlying identity provider. This field is set only in a few, specific contexts. Do not rely on this field being present in the response. 5.1.7.16. V1ExchangeTokenRequest Field Name Required Nullable Type Description Format externalToken String The external authentication token. The server will mask the value of this credential in responses and logs. type String state String 5.1.7.17. V1ExchangeTokenResponse Field Name Required Nullable Type Description Format token String clientState String test Boolean user V1AuthStatus 5.1.7.18. V1UserAttribute Field Name Required Nullable Type Description Format key String values List of string 5.2. GetAuthProviders GET /v1/authProviders 5.2.1. Description 5.2.2. Parameters 5.2.2.1. Query Parameters Name Description Required Default Pattern name - null type - null 5.2.3. Return Type V1GetAuthProvidersResponse 5.2.4. Content Type application/json 5.2.5. Responses Table 5.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetAuthProvidersResponse 0 An unexpected error response. RuntimeError 5.2.6. Samples 5.2.7. Common object reference 5.2.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 5.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 
5.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.2.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.2.7.4. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. - \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. 
SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 5.2.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 5.2.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 5.2.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. 
- DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 5.2.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 5.2.7.9. V1GetAuthProvidersResponse Field Name Required Nullable Type Description Format authProviders List of StorageAuthProvider 5.3. DeleteAuthProvider DELETE /v1/authProviders/{id} 5.3.1. Description 5.3.2. Parameters 5.3.2.1. Path Parameters Name Description Required Default Pattern id X null 5.3.2.2. Query Parameters Name Description Required Default Pattern force - null 5.3.3. Return Type Object 5.3.4. Content Type application/json 5.3.5. Responses Table 5.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 5.3.6. Samples 5.3.7. Common object reference 5.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.4. GetAuthProvider GET /v1/authProviders/{id} 5.4.1. Description 5.4.2. Parameters 5.4.2.1. Path Parameters Name Description Required Default Pattern id X null 5.4.3. Return Type StorageAuthProvider 5.4.4. Content Type application/json 5.4.5. Responses Table 5.4. HTTP Response Codes Code Message Datatype 200 A successful response. StorageAuthProvider 0 An unexpected error response. RuntimeError 5.4.6. Samples 5.4.7. Common object reference 5.4.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 5.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. 
This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.4.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.4.7.4. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. - \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . 
I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 5.4.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 5.4.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 5.4.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. 
Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 5.4.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 5.5. UpdateAuthProvider PATCH /v1/authProviders/{id} 5.5.1. Description 5.5.2. Parameters 5.5.2.1. Path Parameters Name Description Required Default Pattern id X null 5.5.2.2. Body Parameter Name Description Required Default Pattern body V1UpdateAuthProviderRequest X 5.5.3. Return Type StorageAuthProvider 5.5.4. Content Type application/json 5.5.5. Responses Table 5.5. HTTP Response Codes Code Message Datatype 200 A successful response. StorageAuthProvider 0 An unexpected error response. RuntimeError 5.5.6. Samples 5.5.7. Common object reference 5.5.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 5.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.5.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.5.7.4. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. - \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. 
For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 5.5.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 5.5.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 5.5.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. 
They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 5.5.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 5.5.7.9. V1UpdateAuthProviderRequest Field Name Required Nullable Type Description Format id String name String enabled Boolean 5.6. PutAuthProvider PUT /v1/authProviders/{id} 5.6.1. Description 5.6.2. Parameters 5.6.2.1. Path Parameters Name Description Required Default Pattern id X null 5.6.2.2. Body Parameter Name Description Required Default Pattern body StorageAuthProvider X 5.6.3. Return Type StorageAuthProvider 5.6.4. Content Type application/json 5.6.5. Responses Table 5.6. HTTP Response Codes Code Message Datatype 200 A successful response. StorageAuthProvider 0 An unexpected error response. RuntimeError 5.6.6. Samples 5.6.7. Common object reference 5.6.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 5.6.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.6.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. 
(Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.6.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.6.7.4. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. - \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. 
We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 5.6.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 5.6.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 5.6.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 5.6.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 5.7. 
PostAuthProvider POST /v1/authProviders 5.7.1. Description 5.7.2. Parameters 5.7.2.1. Body Parameter Name Description Required Default Pattern body StorageAuthProvider X 5.7.3. Return Type StorageAuthProvider 5.7.4. Content Type application/json 5.7.5. Responses Table 5.7. HTTP Response Codes Code Message Datatype 200 A successful response. StorageAuthProvider 0 An unexpected error response. RuntimeError 5.7.6. Samples 5.7.7. Common object reference 5.7.7.1. AuthProviderRequiredAttribute RequiredAttribute allows to specify a set of attributes which ALL are required to be returned by the auth provider. If any attribute is missing within the external claims of the token issued by Central, the authentication request to this IdP is considered failed. Field Name Required Nullable Type Description Format attributeKey String attributeValue String 5.7.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.7.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.7.7.3. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.7.7.4. StorageAuthProvider Tag: 15. Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String enabled Boolean config Map of string Config holds auth provider specific configuration. Each configuration options are different based on the given auth provider type. OIDC: - \"issuer\": the OIDC issuer according to https://openid.net/specs/openid-connect-core-1_0.html#IssuerIdentifier . - \"client_id\": the client ID according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.2 . - \"client_secret\": the client secret according to https://www.rfc-editor.org/rfc/rfc6749.html#section-2.3.1 . - \"do_not_use_client_secret\": set to \"true\" if you want to create a configuration with only a client ID and no client secret. - \"mode\": the OIDC callback mode, choosing from \"fragment\", \"post\", or \"query\". - \"disable_offline_access_scope\": set to \"true\" if no offline tokens shall be issued. - \"extra_scopes\": a space-delimited string of additional scopes to request in addition to \"openid profile email\" according to https://www.rfc-editor.org/rfc/rfc6749.html#section-3.3 . OpenShift Auth: supports no extra configuration options. User PKI: - \"keys\": the trusted certificates PEM encoded. SAML: - \"sp_issuer\": the service provider issuer according to https://datatracker.ietf.org/doc/html/rfc7522#section-3 . - \"idp_metadata_url\": the metadata URL according to https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf . - \"idp_issuer\": the IdP issuer. - \"idp_cert_pem\": the cert PEM encoded for the IdP endpoint. - \"idp_sso_url\": the IdP SSO URL. - \"idp_nameid_format\": the IdP name ID format. IAP: - \"audience\": the audience to use. loginUrl String The login URL will be provided by the backend, and may not be specified in a request. validated Boolean extraUiEndpoints List of string UI endpoints which to allow in addition to ui_endpoint . I.e., if a login request is coming from any of these, the auth request will use these for the callback URL, not ui_endpoint. active Boolean requiredAttributes List of AuthProviderRequiredAttribute traits StorageTraits claimMappings Map of string Specifies claims from IdP token that will be copied to Rox token attributes. Each key in this map contains a path in IdP token we want to map. Path is separated by \".\" symbol. For example, if IdP token payload looks like: { \"a\": { \"b\" : \"c\", \"d\": true, \"e\": [ \"val1\", \"val2\", \"val3\" ], \"f\": [ true, false, false ], \"g\": 123.0, \"h\": [ 1, 2, 3] } } then \"a.b\" would be a valid key and \"a.z\" is not. We support the following types of claims: * string(path \"a.b\") * bool(path \"a.d\") * string array(path \"a.e\") * bool array (path \"a.f.\") We do NOT support the following types of claims: * complex claims(path \"a\") * float/integer claims(path \"a.g\") * float/integer array claims(path \"a.h\") Each value in this map contains a Rox token attribute name we want to add claim to. If, for example, value is \"groups\", claim would be found in \"external_user.Attributes.groups\" in token. Note: we only support this feature for OIDC auth provider. lastUpdated Date Last updated indicates the last time the auth provider has been updated. In case there have been tokens issued by an auth provider before this timestamp, they will be considered invalid. 
Subsequently, all clients will have to re-issue their tokens (either by refreshing or by an additional login attempt). date-time 5.7.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 5.7.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 5.7.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 5.7.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 5.8. ListAvailableProviderTypes GET /v1/availableAuthProviders 5.8.1. Description 5.8.2. Parameters 5.8.3. Return Type V1AvailableProviderTypesResponse 5.8.4. Content Type application/json 5.8.5. Responses Table 5.8. HTTP Response Codes Code Message Datatype 200 A successful response. V1AvailableProviderTypesResponse 0 An unexpected error response. RuntimeError 5.8.6. Samples 5.8.7. Common object reference 5.8.7.1. AvailableProviderTypesResponseAuthProviderType Field Name Required Nullable Type Description Format type String suggestedAttributes List of string 5.8.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.8.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.8.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.8.7.4. V1AvailableProviderTypesResponse Field Name Required Nullable Type Description Format authProviderTypes List of AvailableProviderTypesResponseAuthProviderType 5.9. GetLoginAuthProviders GET /v1/login/authproviders 5.9.1. Description 5.9.2. Parameters 5.9.3. Return Type V1GetLoginAuthProvidersResponse 5.9.4. Content Type application/json 5.9.5. Responses Table 5.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetLoginAuthProvidersResponse 0 An unexpected error response. RuntimeError 5.9.6. Samples 5.9.7. Common object reference 5.9.7.1. GetLoginAuthProvidersResponseLoginAuthProvider Field Name Required Nullable Type Description Format id String name String type String loginUrl String 5.9.7.2. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 5.9.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 5.9.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 5.9.7.4. V1GetLoginAuthProvidersResponse Field Name Required Nullable Type Description Format authProviders List of GetLoginAuthProvidersResponseLoginAuthProvider
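The endpoints documented above can be exercised with any HTTP client. The following is a minimal illustrative sketch, not taken from the reference itself, that lists auth providers (GET /v1/authProviders) and fetches a single one (GET /v1/authProviders/{id}); the Central URL, the ROX_API_TOKEN environment variable, and the helper names are assumptions made for the example.

import os
import requests

CENTRAL = os.environ.get("ROX_CENTRAL_URL", "https://central.example.com")  # assumed endpoint
TOKEN = os.environ["ROX_API_TOKEN"]  # API token issued by Central (assumed to exist)

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {TOKEN}"})

def list_auth_providers():
    """GET /v1/authProviders, returning the authProviders list from V1GetAuthProvidersResponse."""
    resp = session.get(f"{CENTRAL}/v1/authProviders", timeout=30)
    resp.raise_for_status()
    return resp.json().get("authProviders", [])

def get_auth_provider(provider_id):
    """GET /v1/authProviders/{id}, returning a StorageAuthProvider object."""
    resp = session.get(f"{CENTRAL}/v1/authProviders/{provider_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for provider in list_auth_providers():
        print(provider["id"], provider["name"], provider["type"], provider.get("enabled"))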
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 16", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
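For write operations the same pattern applies. The sketch below, again illustrative only, creates an OIDC provider with POST /v1/authProviders and then disables it with PATCH /v1/authProviders/{id}; every concrete value (issuer, client ID and secret, UI endpoint, claim mapping) is a placeholder, and the claimMappings entry follows the convention documented above (key is the IdP claim path, value is the Rox token attribute name).

import os
import requests

CENTRAL = os.environ.get("ROX_CENTRAL_URL", "https://central.example.com")  # assumed endpoint
session = requests.Session()
session.headers.update({"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"})

# Body for POST /v1/authProviders: a StorageAuthProvider with OIDC-specific config keys.
new_provider = {
    "name": "corp-oidc",                       # placeholder
    "type": "oidc",
    "uiEndpoint": "central.example.com:443",   # placeholder
    "enabled": True,
    "config": {
        "issuer": "https://idp.example.com",   # placeholder
        "client_id": "rhacs-client",           # placeholder
        "client_secret": "REPLACE_ME",         # placeholder
        "mode": "post",
    },
    # Map the IdP claim path "groups" to the Rox token attribute "groups" (OIDC providers only).
    "claimMappings": {"groups": "groups"},
}

created = session.post(f"{CENTRAL}/v1/authProviders", json=new_provider, timeout=30)
created.raise_for_status()
provider = created.json()  # StorageAuthProvider; the backend fills in id and loginUrl

# PATCH /v1/authProviders/{id} with a V1UpdateAuthProviderRequest body (id, name, enabled).
update = {"id": provider["id"], "name": provider["name"], "enabled": False}
patched = session.patch(f"{CENTRAL}/v1/authProviders/{provider['id']}", json=update, timeout=30)
patched.raise_for_status()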
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/authproviderservice
Performing disaster recovery with Identity Management
Performing disaster recovery with Identity Management Red Hat Enterprise Linux 8 Recovering IdM after a server or data loss Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/performing_disaster_recovery_with_identity_management/index
function::print_regs
function::print_regs Name function::print_regs - Print a register dump Synopsis Arguments None Description This function prints a register dump. Does nothing if no registers are available for the probe point.
[ "print_regs()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-print-regs
Creating and managing images
Creating and managing images Red Hat OpenStack Platform 17.1 Create and manage images in Red Hat OpenStack Platform by using the Image service (glance) OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/index
Chapter 5. Hardware Enablement
Chapter 5. Hardware Enablement Hardware utility tools now correctly identify recently released hardware Prior to this update, obsolete ID files caused recently released hardware connected to a computer to be reported as unknown. To fix this bug, PCI, USB, and vendor device identification files have been updated. As a result, hardware utility tools now correctly identify recently released hardware. (BZ#1489294)
null
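A hedged verification sketch (standard utilities, not taken from this entry): after the ID files are updated, the following commands should list vendor and device names rather than reporting devices as unknown:
lspci -nn   # PCI devices, resolved against the updated pci.ids database
lsusb       # USB devices, resolved against the updated usb.ids database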
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/bug_fixes_hardware_enablement
Authorization APIs
Authorization APIs OpenShift Container Platform 4.18 Reference guide for authorization APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/authorization_apis/index
Chapter 36. Manually Upgrading the Kernel
Chapter 36. Manually Upgrading the Kernel The Red Hat Enterprise Linux kernel is custom built by the Red Hat kernel team to ensure its integrity and compatibility with supported hardware. Before Red Hat releases a kernel, it must first pass a rigorous set of quality assurance tests. Red Hat Enterprise Linux kernels are packaged in RPM format so that they are easy to upgrade and verify using the Red Hat Update Agent , or the up2date command. The Red Hat Update Agent automatically queries the Red Hat Network servers and determines which packages need to be updated on your machine, including the kernel. This chapter is only useful for those individuals who require manual updating of kernel packages, without using the up2date command. Warning Please note that building a custom kernel is not supported by the Red Hat Global Services Support team, and therefore is not explored in this manual. Note Use of up2date is highly recommended by Red Hat for installing upgraded kernels. For more information on Red Hat Network, the Red Hat Update Agent , and up2date , refer to Chapter 16, Red Hat Network . 36.1. Overview of Kernel Packages Red Hat Enterprise Linux contains the following kernel packages (some may not apply to your architecture): kernel - Contains the kernel and the following key features: Uniprocessor support for x86 and Athlon systems (can be run on a multi-processor system, but only one processor is utilized) Multi-processor support for all other architectures For x86 systems, only the first 4 GB of RAM is used; use the kernel-hugemem package for x86 systems with over 4 GB of RAM kernel-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel package. kernel-hugemem - (only for i686 systems) In addition to the options enabled for the kernel package, the key configuration options are as follows: Support for more than 4 GB of RAM (up to 64 GB for x86) Note kernel-hugemem is required for memory configurations higher than 16 GB. PAE (Physical Address Extension) or 3 level paging on x86 processors that support PAE Support for multiple processors 4GB/4GB split - 4GB of virtual address space for the kernel and almost 4GB for each user process on x86 systems kernel-hugemem-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel-hugemem package. kernel-smp - Contains the kernel for multi-processor systems. The following are the key features: Multi-processor support Support for more than 4 GB of RAM (up to 16 GB for x86) PAE (Physical Address Extension) or 3 level paging on x86 processors that support PAE kernel-smp-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel-smp package. kernel-utils - Contains utilities that can be used to control the kernel or system hardware. kernel-doc - Contains documentation files from the kernel source. Various portions of the Linux kernel and the device drivers shipped with it are documented in these files. Installation of this package provides a reference to the options that can be passed to Linux kernel modules at load time. By default, these files are placed in the /usr/share/doc/kernel-doc- <version> / directory. Note The kernel-source package has been removed and replaced with an RPM that can only be retrieved from Red Hat Network. This *.src.rpm must then be rebuilt locally using the rpmbuild command.
Refer to the latest distribution Release Notes, including all updates, at https://www.redhat.com/docs/manuals/enterprise/ for more information on obtaining and installing the kernel source package.
null
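A hedged sketch of the manual flow this chapter introduces (the version string is a placeholder, not a real package name):
rpm -q kernel                          # list the currently installed kernel packages
rpm -ivh kernel-<version>.<arch>.rpm   # install the new kernel alongside the old one; -i rather than -U keeps the previous kernel available as a fallback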
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/manually_upgrading_the_kernel
Chapter 3. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1]
Chapter 3. AdminPolicyBasedExternalRoute [k8s.ovn.org/v1] Description AdminPolicyBasedExternalRoute is a CRD allowing the cluster administrators to configure policies for external gateway IPs to be applied to all the pods contained in selected namespaces. Egress traffic from the pods that belong to the selected namespaces to outside the cluster is routed through these external gateway IPs. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute status object AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. 3.1.1. .spec Description AdminPolicyBasedExternalRouteSpec defines the desired state of AdminPolicyBasedExternalRoute Type object Required from nextHops Property Type Description from object From defines the selectors that will determine the target namespaces to this CR. nextHops object NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. 3.1.2. .spec.from Description From defines the selectors that will determine the target namespaces to this CR. Type object Required namespaceSelector Property Type Description namespaceSelector object NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR 3.1.3. .spec.from.namespaceSelector Description NamespaceSelector defines a selector to be used to determine which namespaces will be targeted by this CR Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.4. .spec.from.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.5. .spec.from.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.nextHops Description NextHops defines two types of hops: Static and Dynamic. Each hop defines at least one external gateway IP. Type object Property Type Description dynamic array DynamicHops defines a slice of DynamicHop. This field is optional. dynamic[] object DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. static array StaticHops defines a slice of StaticHop. This field is optional. static[] object StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. 3.1.7. .spec.nextHops.dynamic Description DynamicHops defines a slice of DynamicHop. This field is optional. Type array 3.1.8. .spec.nextHops.dynamic[] Description DynamicHop defines the configuration for a dynamic external gateway interface. These interfaces are wrapped around a pod object that resides inside the cluster. The field NetworkAttachmentName captures the name of the multus network name to use when retrieving the gateway IP to use. The PodSelector and the NamespaceSelector are mandatory fields. Type object Required namespaceSelector podSelector Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. namespaceSelector object NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. networkAttachmentName string NetworkAttachmentName determines the multus network name to use when retrieving the pod IPs that will be used as the gateway IP. When this field is empty, the logic assumes that the pod is configured with HostNetwork and is using the node's IP as gateway. podSelector object PodSelector defines the selector to filter the pods that are external gateways. 3.1.9. .spec.nextHops.dynamic[].namespaceSelector Description NamespaceSelector defines a selector to filter the namespaces where the pod gateways are located. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.10. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.11. .spec.nextHops.dynamic[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to.
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.12. .spec.nextHops.dynamic[].podSelector Description PodSelector defines the selector to filter the pods that are external gateways. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.13. .spec.nextHops.dynamic[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.14. .spec.nextHops.dynamic[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.15. .spec.nextHops.static Description StaticHops defines a slice of StaticHop. This field is optional. Type array 3.1.16. .spec.nextHops.static[] Description StaticHop defines the configuration of a static IP that acts as an external Gateway Interface. IP field is mandatory. Type object Required ip Property Type Description bfdEnabled boolean BFDEnabled determines if the interface implements the Bidirectional Forward Detection protocol. Defaults to false. ip string IP defines the static IP to be used for egress traffic. The IP can be either IPv4 or IPv6. 3.1.17. .status Description AdminPolicyBasedRouteStatus contains the observed status of the AdminPolicyBased route types. Type object Property Type Description lastTransitionTime string Captures the time when the last change was applied. messages array (string) An array of Human-readable messages indicating details about the status of the object. status string A concise indication of whether the AdminPolicyBasedRoute resource is applied with success 3.2. 
API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes DELETE : delete collection of AdminPolicyBasedExternalRoute GET : list objects of kind AdminPolicyBasedExternalRoute POST : create an AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} DELETE : delete an AdminPolicyBasedExternalRoute GET : read the specified AdminPolicyBasedExternalRoute PATCH : partially update the specified AdminPolicyBasedExternalRoute PUT : replace the specified AdminPolicyBasedExternalRoute /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status GET : read status of the specified AdminPolicyBasedExternalRoute PATCH : partially update status of the specified AdminPolicyBasedExternalRoute PUT : replace status of the specified AdminPolicyBasedExternalRoute 3.2.1. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes HTTP method DELETE Description delete collection of AdminPolicyBasedExternalRoute Table 3.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AdminPolicyBasedExternalRoute Table 3.2. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRouteList schema 401 - Unauthorized Empty HTTP method POST Description create an AdminPolicyBasedExternalRoute Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 3.5. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 202 - Accepted AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 3.2.2. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute HTTP method DELETE Description delete an AdminPolicyBasedExternalRoute Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8.
HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AdminPolicyBasedExternalRoute Table 3.9. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AdminPolicyBasedExternalRoute Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AdminPolicyBasedExternalRoute Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 3.14. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty 3.2.3. /apis/k8s.ovn.org/v1/adminpolicybasedexternalroutes/{name}/status Table 3.15.
Global path parameters Parameter Type Description name string name of the AdminPolicyBasedExternalRoute HTTP method GET Description read status of the specified AdminPolicyBasedExternalRoute Table 3.16. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AdminPolicyBasedExternalRoute Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AdminPolicyBasedExternalRoute Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body AdminPolicyBasedExternalRoute schema Table 3.21. HTTP responses HTTP code Response body 200 - OK AdminPolicyBasedExternalRoute schema 201 - Created AdminPolicyBasedExternalRoute schema 401 - Unauthorized Empty
null
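A hedged example manifest assembled from the schema above (the name, label, and gateway IP are illustrative placeholders):
apiVersion: k8s.ovn.org/v1
kind: AdminPolicyBasedExternalRoute
metadata:
  name: example-external-route
spec:
  from:
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: example-namespace   # selects the target namespace
  nextHops:
    static:
      - ip: "192.0.2.10"    # documentation-range address standing in for an external gateway
        bfdEnabled: false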
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/adminpolicybasedexternalroute-k8s-ovn-org-v1
Chapter 13. Managing container storage interface (CSI) component placements
Chapter 13. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field have double quotation marks. For example, the values true which is of type boolean, and 1 which is of type int must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin-* and csi-rbdplugin-* pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin-* and csi-rbdplugin-* pods are running on the infra nodes.
[ "oc edit configmap rook-ceph-operator-config -n openshift-storage", "oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml", "apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] kind: ConfigMap metadata: [...]", "oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>", "oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/managing-container-storage-interface-component-placements_rhodf
Chapter 8. Configuring network settings after installing OpenStack
Chapter 8. Configuring network settings after installing OpenStack You can configure network settings for an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation. 8.1. Configuring application access with floating IP addresses After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic. Note You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set. Prerequisites OpenShift Container Platform cluster must be installed Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation. Procedure After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port: Show the port: USD openstack port show <cluster_name>-<cluster_ID>-ingress-port Attach the port to the IP address: USD openstack floating ip set --port <ingress_port_ID> <apps_FIP> Add a wildcard A record for *.apps. to your DNS file: *.apps.<cluster_name>.<base_domain> IN A <apps_FIP> Note If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts : <apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain> 8.2. Enabling OVS hardware offloading For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading. OVS is a multi-layer virtual switch that enables large-scale, multi-server network virtualization. Prerequisites You installed a cluster on RHOSP that is configured for single-root input/output virtualization (SR-IOV). You installed the SR-IOV Network Operator on your cluster. You created two hw-offload type virtual function (VF) interfaces on your cluster. Note Application layer gateway flows are broken in OpenShift Container Platform versions 4.10, 4.11, and 4.12. Also, you cannot offload the application layer gateway flow for OpenShift Container Platform version 4.13. Procedure Create an SriovNetworkNodePolicy policy for the two hw-offload type VF interfaces that are on your cluster: The first virtual function interface apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: "hwoffload9" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: "hwoffload9" 1 Insert the SriovNetworkNodePolicy value here. 2 Both interfaces must include physical function (PF) names.
The second virtual function interface apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: "hwoffload10" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: "hwoffload10" 1 Insert the SriovNetworkNodePolicy value here. 2 Both interfaces must include physical function (PF) names. Create NetworkAttachmentDefinition resources for the two interfaces: A NetworkAttachmentDefinition resource for the first interface apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6" }' A NetworkAttachmentDefinition resource for the second interface apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5" }' Use the interfaces that you created with a pod. For example: A pod that uses the two OVS offload interfaces apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest 8.3. Attaching an OVS hardware offloading network You can attach an Open vSwitch (OVS) hardware offloading network to your cluster. Prerequisites Your cluster is installed and running. You provisioned an OVS hardware offloading network on Red Hat OpenStack Platform (RHOSP) to use with your cluster. Procedure Create a file named network.yaml from the following template: spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}' 1 type: Raw where: pciBusId Specifies the device that is connected to the offloading network. If you do not have it, you can find this value by running the following command: USD oc describe SriovNetworkNodeState -n openshift-sriov-network-operator From a command line, enter the following command to patch your cluster with the file: USD oc apply -f network.yaml 8.4. Enabling IPv6 connectivity to pods on RHOSP To enable IPv6 connectivity between pods that have additional networks that are on different nodes, disable port security for the IPv6 port of the server. Disabling port security obviates the need to create allowed address pairs for each IPv6 address that is assigned to pods and enables traffic on the security group. Important Only the following IPv6 additional network configurations are supported: SLAAC and host-device SLAAC and MACVLAN DHCP stateless and host-device DHCP stateless and MACVLAN Procedure On a command line, enter the following command: USD openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1 1 1 Specify the IPv6 port of the compute server. Important This command removes security groups from the port and disables port security. Traffic restrictions are removed entirely from the port. 
8.5. Create pods that have IPv6 connectivity on RHOSP After you enable IPv6 connectivity for pods and add it to them, create pods that have secondary IPv6 connections. Procedure Define pods that use your IPv6 namespace and the annotation k8s.v1.cni.cncf.io/networks: <additional_network_name> , where <additional_network_name> is the name of the additional network. For example, as part of a Deployment object: apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080 Create the pod. For example, on a command line, enter the following command: USD oc create -f <ipv6_enabled_resource> 1 1 Specify the file that contains your resource definition. 8.6. Adding IPv6 connectivity to pods on RHOSP After you enable IPv6 connectivity in pods, add connectivity to them by using a Container Network Interface (CNI) configuration. Procedure To edit the Cluster Network Operator (CNO), enter the following command: USD oc edit networks.operator.openshift.io cluster Specify your CNI configuration under the spec field. For example, the following configuration uses a SLAAC address mode with MACVLAN: ... spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "ipv6", "type": "macvlan", "master": "ens4"}' 2 type: Raw 1 Be sure to create pods in the same namespace. 2 The interface in the network attachment "master" field can differ from "ens4" when more networks are configured or when a different kernel driver is used. Note If you are using stateful address mode, include the IP Address Management (IPAM) in the CNI configuration. DHCPv6 is not supported by Multus. Save your changes and quit the text editor to commit your changes. Verification On a command line, enter the following command: USD oc get network-attachment-definitions -A Example output NAMESPACE NAME AGE ipv6 ipv6 21h You can now create pods that have secondary IPv6 connections.
[ "openstack port show <cluster_name>-<cluster_ID>-ingress-port", "openstack floating ip set --port <ingress_port_ID> <apps_FIP>", "*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>", "<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'", "apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest", "spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hwoffload1\", \"type\": \"host-device\",\"pciBusId\": \"0000:00:05.0\", \"ipam\": {}}' 1 type: Raw", "oc describe SriovNetworkNodeState -n openshift-sriov-network-operator", "oc apply -f network.yaml", "openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1", "apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080", "oc create -f <ipv6_enabled_resource> 1", "oc edit networks.operator.openshift.io cluster", "spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipv6\", \"type\": \"macvlan\", 
\"master\": \"ens4\"}' 2 type: Raw", "oc get network-attachment-definitions -A", "NAMESPACE NAME AGE ipv6 ipv6 21h" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/installing-openstack-network-config
Chapter 30. KafkaJmxAuthenticationPassword schema reference
Chapter 30. KafkaJmxAuthenticationPassword schema reference Used in: KafkaJmxOptions The type property is a discriminator that distinguishes use of the KafkaJmxAuthenticationPassword type from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword . Property Property type Description type string Must be password .
null
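A hedged fragment showing where this type appears in a Kafka custom resource (the cluster name is illustrative; the field placement follows the KafkaJmxOptions schema this type is used in):
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    jmxOptions:
      authentication:
        type: password   # selects KafkaJmxAuthenticationPassword; the JMX credentials are then generated in a Secret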
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkajmxauthenticationpassword-reference
Web console
Web console OpenShift Container Platform 4.14 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/web_console/index
Chapter 6. Management of OSDs using the Ceph Orchestrator
Chapter 6. Management of OSDs using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrators to manage OSDs of a Red Hat Ceph Storage cluster. 6.1. Ceph OSDs When a Red Hat Ceph Storage cluster is up and running, you can add OSDs to the storage cluster at runtime. A Ceph OSD generally consists of one ceph-osd daemon for one storage drive and its associated journal within a node. If a node has multiple storage drives, then map one ceph-osd daemon for each drive. Red Hat recommends checking the capacity of a cluster regularly to see if it is reaching the upper end of its storage capacity. As a storage cluster reaches its near full ratio, add one or more OSDs to expand the storage cluster's capacity. When you want to reduce the size of a Red Hat Ceph Storage cluster or replace the hardware, you can also remove an OSD at runtime. If the node has multiple storage drives, you might also need to remove one of the ceph-osd daemon for that drive. Generally, it's a good idea to check the capacity of the storage cluster to see if you are reaching the upper end of its capacity. Ensure that when you remove an OSD that the storage cluster is not at its near full ratio. Important Do not let a storage cluster reach the full ratio before adding an OSD. OSD failures that occur after the storage cluster reaches the near full ratio can cause the storage cluster to exceed the full ratio. Ceph blocks write access to protect the data until you resolve the storage capacity issues. Do not remove OSDs without considering the impact on the full ratio first. 6.2. Ceph OSD node configuration Configure Ceph OSDs and their supporting hardware similarly as a storage strategy for the pool(s) that will use the OSDs. Ceph prefers uniform hardware across pools for a consistent performance profile. For best performance, consider a CRUSH hierarchy with drives of the same type or size. If you add drives of dissimilar size, adjust their weights accordingly. When you add the OSD to the CRUSH map, consider the weight for the new OSD. Hard drive capacity grows approximately 40% per year, so newer OSD nodes might have larger hard drives than older nodes in the storage cluster, that is, they might have a greater weight. Before doing a new installation, review the Requirements for Installing Red Hat Ceph Storage chapter in the Installation Guide . 6.3. Automatically tuning OSD memory The OSD daemons adjust the memory consumption based on the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. If Red Hat Ceph Storage is deployed on dedicated nodes that do not share memory with other services, cephadm automatically adjusts the per-OSD consumption based on the total amount of RAM and the number of deployed OSDs. Important By default, the osd_memory_target_autotune parameter is set to true in Red Hat Ceph Storage 5.1. Syntax Once the storage cluster is upgraded to Red Hat Ceph Storage 5.0, for cluster maintenance such as addition of OSDs or replacement of OSDs, Red Hat recommends setting osd_memory_target_autotune parameter to true to autotune osd memory as per system memory. Cephadm starts with a fraction mgr/cephadm/autotune_memory_target_ratio , which defaults to 0.7 of the total RAM in the system, subtract off any memory consumed by non-autotuned daemons such as non-OSDS and for OSDs for which osd_memory_target_autotune is false, and then divide by the remaining OSDs. 
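The autotuning behaviour described here maps onto configuration options along these lines (a hedged sketch; the ratio value is illustrative):
ceph config set osd osd_memory_target_autotune true                 # let cephadm size osd_memory_target from system RAM
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.7    # fraction of total RAM handed to the autotuner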
By default, autotune_memory_target_ratio is 0.2 for hyper-converged infrastructure and 0.7 for other environments. The osd_memory_target parameter is calculated as follows: Syntax SPACE_ALLOCATED_FOR_OTHER_DAEMONS may optionally include the following daemon space allocations: Alertmanager: 1 GB Grafana: 1 GB Ceph Manager: 4 GB Ceph Monitor: 2 GB Node-exporter: 1 GB Prometheus: 1 GB For example, if a node has 24 OSDs and has 251 GB RAM space, then osd_memory_target is 7860684936 . The final targets are reflected in the configuration database with options. You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column. Note In Red Hat Ceph Storage 5.1, the default setting of osd_memory_target_autotune true is unsuitable for hyperconverged infrastructures where compute and Ceph storage services are colocated. In a hyperconverged infrastructure, the autotune_memory_target_ratio can be set to 0.2 to reduce the memory consumption of Ceph. Example You can manually set a specific memory target for an OSD in the storage cluster. Example You can manually set a specific memory target for an OSD host in the storage cluster. Syntax Example Note Enabling osd_memory_target_autotune overwrites existing manual OSD memory target settings. To prevent daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled, set the _no_autotune_memory label on the host. Syntax You can exclude an OSD from memory autotuning by disabling the autotune option and setting a specific memory target. Example 6.4. Listing devices for Ceph OSD deployment You can check the list of available devices before deploying OSDs using the Ceph Orchestrator. The commands are used to print a list of devices discoverable by Cephadm. A storage device is considered available if all of the following conditions are met: The device must have no partitions. The device must not have any LVM state. The device must not be mounted. The device must not contain a file system. The device must not contain a Ceph BlueStore OSD. The device must be larger than 5 GB. Note Ceph will not provision an OSD on a device that is not available. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Using the --wide option provides all details relating to the device, including any reasons that the device might not be eligible for use as an OSD. This option does not support NVMe devices. Optional: To enable Health , Ident , and Fault fields in the output of ceph orch device ls , run the following commands: Note These fields are supported by libstoragemgmt library and currently supports SCSI, SAS, and SATA devices. As root user outside the Cephadm shell, check your hardware's compatibility with libstoragemgmt library to avoid unplanned interruption to services: Example In the output, you see the Health Status as Good with the respective SCSI VPD 0x83 ID. Note If you do not get this information, then enabling the fields might cause erratic behavior of devices. Log back into the Cephadm shell and enable libstoragemgmt support: Example Once this is enabled, ceph orch device ls gives the output of Health field as Good . Verification List the devices: Example 6.5. Zapping devices for Ceph OSD deployment You need to check the list of available devices before deploying OSDs. 
If there is no space available on the devices, you can clear the data on the devices by zapping them. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Clear the data of a device: Syntax Example Verification Verify the space is available on the device: Example You will see that the field under Available is Yes . Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide for more information. 6.6. Deploying Ceph OSDs on all available devices You can deploy all OSDs on all the available devices. Cephadm allows the Ceph Orchestrator to discover and deploy the OSDs on any available and unused storage device. To deploy OSDs on all available devices, run the command without the unmanaged parameter and then re-run the command with the parameter to prevent it from creating future OSDs. Note The deployment of OSDs with --all-available-devices is generally used for smaller clusters. For larger clusters, use the OSD specification file. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Deploy OSDs on all available devices: Example The effect of ceph orch apply is persistent which means that the Orchestrator automatically finds the device, adds it to the cluster, and creates new OSDs. This occurs under the following conditions: New disks or drives are added to the system. Existing disks or drives are zapped. An OSD is removed and the devices are zapped. You can disable automatic creation of OSDs on all the available devices by using the --unmanaged parameter. Example Setting the parameter --unmanaged to true disables the creation of OSDs and also there is no change if you apply a new OSD service. Note The command ceph orch daemon add creates new OSDs, but does not add an OSD service. Verification List the service: Example View the details of the node and devices: Example Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.7. Deploying Ceph OSDs on specific devices and hosts You can deploy all the Ceph OSDs on specific devices and hosts using the Ceph Orchestrator. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Deploy OSDs on specific devices and hosts: Syntax Example To deploy OSDs on a raw physical device, without an LVM layer, use the --method raw option. Syntax Example Note If you have separate DB or WAL devices, the ratio of block to DB or WAL devices MUST be 1:1. Verification List the service: Example View the details of the node and devices: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.8. Advanced service specifications and filters for deploying OSDs Service Specification of type OSD is a way to describe a cluster layout using the properties of disks.
It gives the user an abstract way to tell Ceph which disks should turn into an OSD with the required configuration without knowing the specifics of device names and paths. For each device and each host, define a yaml file or a json file. General settings for OSD specifications service_type : 'osd': This is mandatory to create OSDS service_id : Use the service name or identification you prefer. A set of OSDs is created using the specification file. This name is used to manage all the OSDs together and represent an Orchestrator service. placement : This is used to define the hosts on which the OSDs needs to be deployed. You can use on the following options: host_pattern : '*' - A host name pattern used to select hosts. label : 'osd_host' - A label used in the hosts where OSD needs to be deployed. hosts : 'host01', 'host02' - An explicit list of host names where OSDs needs to be deployed. selection of devices : The devices where OSDs are created. This allows to separate an OSD from different devices. You can create only BlueStore OSDs which have three components: OSD data: contains all the OSD data WAL: BlueStore internal journal or write-ahead Log DB: BlueStore internal metadata data_devices : Define the devices to deploy OSD. In this case, OSDs are created in a collocated schema. You can use filters to select devices and folders. wal_devices : Define the devices used for WAL OSDs. You can use filters to select devices and folders. db_devices : Define the devices for DB OSDs. You can use the filters to select devices and folders. encrypted : An optional parameter to encrypt information on the OSD which can set to either True or False unmanaged : An optional parameter, set to False by default. You can set it to True if you do not want the Orchestrator to manage the OSD service. block_wal_size : User-defined value, in bytes. block_db_size : User-defined value, in bytes. osds_per_device : User-defined value for deploying more than one OSD per device. method : An optional parameter to specify if an OSD is created with an LVM layer or not. Set to raw if you want to create OSDs on raw physical devices that do not include an LVM layer. If you have separate DB or WAL devices, the ratio of block to DB or WAL devices MUST be 1:1. Filters for specifying devices Filters are used in conjunction with the data_devices , wal_devices and db_devices parameters. Name of the filter Description Syntax Example Model Target specific disks. You can get details of the model by running lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,SIZE,MODEL command or smartctl -i / DEVIVE_PATH Model: DISK_MODEL_NAME Model: MC-55-44-XZ Vendor Target specific disks Vendor: DISK_VENDOR_NAME Vendor: Vendor Cs Size Specification Includes disks of an exact size size: EXACT size: '10G' Size Specification Includes disks size of which is within the range size: LOW:HIGH size: '10G:40G' Size Specification Includes disks less than or equal to in size size: :HIGH size: ':10G' Size Specification Includes disks equal to or greater than in size size: LOW: size: '40G:' Rotational Rotational attribute of the disk. 1 matches all disks that are rotational and 0 matches all the disks that are non-rotational. If rotational =0, then OSD is configured with SSD or NVME. If rotational=1 then the OSD is configured with HDD. rotational: 0 or 1 rotational: 0 All Considers all the available disks all: true all: true Limiter When you specified valid filters but want to limit the amount of matching disks you can use the 'limit' directive. 
It should be used only as a last resort. limit: NUMBER limit: 2 Note To create an OSD with non-collocated components in the same host, you have to specify the different type of devices used and the devices should be on the same host. Note The devices used for deploying OSDs must be supported by libstoragemgmt . Additional Resources See the Deploying Ceph OSDs using the advanced specifications section in the Red Hat Ceph Storage Operations Guide . For more information on libstoragemgmt , see the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.9. Deploying Ceph OSDs using advanced service specifications The service specification of type OSD is a way to describe a cluster layout using the properties of disks. It gives the user an abstract way to tell Ceph which disks should turn into an OSD with the required configuration without knowing the specifics of device names and paths. You can deploy the OSD for each device and each host by defining a yaml file or a json file. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure On the monitor node, create the osd_spec.yaml file: Example Edit the osd_spec.yaml file to include the following details: Syntax Simple scenarios: In these cases, all the nodes have the same set-up. Example Example Simple scenario: In this case, all the nodes have the same setup with OSD devices created in raw mode, without an LVM layer. Example Advanced scenario: This would create the desired layout by using all HDDs as data_devices with two SSD assigned as dedicated DB or WAL devices. The remaining SSDs are data_devices that have the NVMEs vendors assigned as dedicated DB or WAL devices. Example Advanced scenario with non-uniform nodes: This applies different OSD specs to different hosts depending on the host_pattern key. Example Advanced scenario with dedicated WAL and DB devices: Example Advanced scenario with multiple OSDs per device: Example For pre-created volumes, edit the osd_spec.yaml file to include the following details: Syntax Example For OSDs by ID, edit the osd_spec.yaml file to include the following details: Note This configuration is applicable for Red Hat Ceph Storage 5.3z1 and later releases. For earlier releases, use pre-created lvm. Syntax Example For OSDs by path, edit the osd_spec.yaml file to include the following details: Note This configuration is applicable for Red Hat Ceph Storage 5.3z1 and later releases. For earlier releases, use pre-created lvm. Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Before deploying OSDs, do a dry run: Note This step gives a preview of the deployment, without deploying the daemons. Example Deploy OSDs using service specification: Syntax Example Verification List the service: Example View the details of the node and devices: Example Additional Resources See the Advanced service specifications and filters for deploying OSDs section in the Red Hat Ceph Storage Operations Guide . 6.10. Removing the OSD daemons using the Ceph Orchestrator You can remove the OSD from a cluster by using Cephadm. Removing an OSD from a cluster involves two steps: Evacuates all placement groups (PGs) from the cluster. Removes the PG-free OSDs from the cluster. The --zap option removed the volume groups, logical volumes, and the LVM metadata. 
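A hedged sketch of the two-step removal just described (the OSD ID is illustrative):
ceph orch osd rm 0 --zap     # drains placement groups from osd.0, removes it, and zaps its devices
ceph orch osd rm status      # monitors the drain; the OSD leaves the list once no PGs remain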
Note After removing OSDs, if the drives the OSDs were deployed on once again become available, cephadm might automatically try to deploy more OSDs on these drives if they match an existing drivegroup specification. If you deployed the OSDs you are removing with a spec and do not want any new OSDs deployed on the drives after removal, modify the drivegroup specification before removal. While deploying OSDs, if you have used the --all-available-devices option, set unmanaged: true to stop it from picking up new drives at all. For other deployments, modify the specification. See the Deploying Ceph OSDs using advanced service specifications for more details. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Ceph Monitor, Ceph Manager, and Ceph OSD daemons are deployed on the storage cluster. Procedure Log into the Cephadm shell: Example Check the device and the node from which the OSD has to be removed: Example Remove the OSD: Syntax Example Note If you remove the OSD from the storage cluster without an option, such as --replace , the device is removed from the storage cluster completely. If you want to use the same device for deploying OSDs, you have to first zap the device before adding it to the storage cluster. Optional: To remove multiple OSDs from a specific node, run the following command: Syntax Example Check the status of the OSD removal: Example When no PGs are left on the OSD, it is decommissioned and removed from the cluster. Verification Verify the details of the devices and the nodes from which the Ceph OSDs are removed: Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide for more information. See the Zapping devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide for more information on clearing space on devices. 6.11. Replacing the OSDs using the Ceph Orchestrator When disks fail, you can replace the physical storage device and reuse the same OSD ID to avoid having to reconfigure the CRUSH map. You can replace the OSDs from the cluster using the --replace option. Note If you want to replace a single OSD, see Deploying Ceph OSDs on specific devices and hosts . If you want to deploy OSDs on all available devices, see Deploying Ceph OSDs on all available devices . This option preserves the OSD ID when the OSD is removed with the ceph orch rm command. The OSD is not permanently removed from the CRUSH hierarchy, but is assigned the destroyed flag. This flag is used to determine which OSD IDs can be reused in the next OSD deployment. Similar to the rm command, replacing an OSD from a cluster involves two steps: Evacuating all placement groups (PGs) from the cluster. Removing the PG-free OSD from the cluster. If you use an OSD specification for deployment, the OSD ID of the disk being replaced is automatically assigned to the newly added disk as soon as it is inserted. Note After removing OSDs, if the drives the OSDs were deployed on once again become available, cephadm might automatically try to deploy more OSDs on these drives if they match an existing drivegroup specification. If you deployed the OSDs you are removing with a spec and do not want any new OSDs deployed on the drives after removal, modify the drivegroup specification before removal. While deploying OSDs, if you have used the --all-available-devices option, set unmanaged: true to stop it from picking up new drives at all. For other deployments, modify the specification. See the Deploying Ceph OSDs using advanced service specifications for more details.
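The overall replacement flow can be condensed into the following sketch, which omits reapplying the OSD specification; the OSD ID 0, the node name node.example.com, and the device path /dev/sdi are placeholder values:

ceph orch osd rm 0 --replace           # preserves OSD ID 0 and marks it destroyed
ceph orch osd rm status                # wait until the PGs are drained
ceph orch pause                        # stop the Orchestrator from acting on existing OSD specifications
ceph orch device zap node.example.com /dev/sdi --force
ceph orch resume                       # the preserved OSD ID is reused on the new device
ceph osd tree                          # confirm an OSD with the same ID is up on the same host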
Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager, and OSD daemons are deployed on the storage cluster. A new OSD that replaces the removed OSD must be created on the same host from which the OSD was removed. Procedure Log into the Cephadm shell: Example Ensure that you dump and save a mapping of your OSD configurations for future reference: Example Check the device and the node from which the OSD has to be replaced: Example Remove the OSD from the cephadm managed cluster: Important If the storage cluster has health_warn or other errors associated with it, check and try to fix any errors before replacing the OSD to avoid data loss. Syntax The --force option can be used when there are ongoing operations on the storage cluster. Example Recreate the new OSD by applying the following OSD specification: Example Check the status of the OSD replacement: Example Stop the Orchestrator from applying any existing OSD specification: Example Zap the OSD devices that have been removed: Example Resume the Orchestrator from pause mode: Example Check the status of the OSD replacement: Example Verification Verify the details of the devices and the nodes from which the Ceph OSDs are replaced: Example You can see an OSD with the same ID as the one you replaced running on the same host. Verify that the db_device for the newly deployed OSDs is the replaced db_device: Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide for more information. 6.12. Replacing the OSDs with pre-created LVM After purging the OSD with the ceph-volume lvm zap command, if the directory is not present, then you can replace the OSDs with the OSD service specification file with the pre-created LVM. Prerequisites A running Red Hat Ceph Storage cluster. Failed OSD Procedure Log into the Cephadm shell: Example Remove the OSD: Syntax Example Verify the OSD is destroyed: Example Zap and remove the OSD using the ceph-volume command: Syntax Example Check the OSD topology: Example Recreate the OSD with a specification file corresponding to that specific OSD topology: Example Apply the updated specification file: Example Verify the OSD is back: Example 6.13. Replacing the OSDs in a non-colocated scenario When an OSD fails in a non-colocated scenario, you can replace the WAL/DB devices. The procedure is the same for DB and WAL devices. You need to edit the paths under db_devices for DB devices and the paths under wal_devices for WAL devices. Prerequisites A running Red Hat Ceph Storage cluster. Daemons are non-colocated. Failed OSD Procedure Identify the devices in the cluster: Example Log into the Cephadm shell: Example Identify the OSDs and their DB device: Example In the osds.yaml file, set the unmanaged parameter to true , otherwise cephadm redeploys the OSDs: Example Apply the updated specification file: Example Check the status: Example Remove the OSDs.
Ensure that you use the --zap option to remove the backend services and the --replace option to retain the OSD IDs: Example Check the status: Example Edit the osds.yaml specification file to change the unmanaged parameter to false and replace the path to the DB device if it has changed after the device got physically replaced: Example In the above example, /dev/sdh is replaced with /dev/sde . Important If you use the same host specification file to replace the faulty DB device on a single OSD node, modify the host_pattern option to specify only the OSD node, otherwise the deployment fails and you cannot find the new DB device on other hosts. Reapply the specification file with the --dry-run option to ensure that the OSDs will be deployed with the new DB device: Example Apply the specification file: Example Check that the OSDs are redeployed: Example Verification From the OSD host where the OSDs are redeployed, verify whether they are on the new DB device: Example 6.14. Stopping the removal of the OSDs using the Ceph Orchestrator You can stop the removal of only the OSDs that are queued for removal. This resets the initial state of the OSD and takes it off the removal queue. If the OSD is in the process of removal, then you cannot stop the process. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager, and OSD daemons are deployed on the cluster. Remove OSD process initiated. Procedure Log into the Cephadm shell: Example Check the device and the node from which the OSD was initiated to be removed: Example Stop the removal of the queued OSD: Syntax Example Check the status of the OSD removal: Example Verification Verify the details of the devices and the nodes from which the Ceph OSDs were queued for removal: Example Additional Resources See the Removing the OSD daemons using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 6.15. Activating the OSDs using the Ceph Orchestrator You can activate the OSDs in the cluster in cases where the operating system of the host was reinstalled. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager, and OSD daemons are deployed on the storage cluster. Procedure Log into the Cephadm shell: Example After the operating system of the host is reinstalled, activate the OSDs: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 6.15.1. Observing the data migration When you add an OSD to or remove an OSD from the CRUSH map, Ceph begins rebalancing the data by migrating placement groups to the new or existing OSDs. You can observe the data migration using the ceph -w command. Prerequisites A running Red Hat Ceph Storage cluster. Recently added or removed an OSD. Procedure To observe the data migration: Example Watch as the placement group states change from active+clean to active, some degraded objects , and finally active+clean when migration completes. To exit the utility, press Ctrl + C .
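A short sketch tying these operations together; the OSD ID 0 and the host name host03 are the sample values used in the command examples in this chapter:

ceph orch osd rm stop 0            # take a queued OSD off the removal queue
ceph orch osd rm status            # confirm that nothing is pending
ceph cephadm osd activate host03   # reactivate the OSDs after the operating system is reinstalled
ceph -w                            # watch placement groups rebalance until they return to active+clean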
6.16. Recalculating the placement groups Placement groups (PGs) define the spread of any pool data across the available OSDs. A placement group is built upon the redundancy algorithm to be used. For a 3-way replication, the redundancy is defined to use three different OSDs. For erasure-coded pools, the number of OSDs to use is defined by the number of chunks. When defining a pool, the number of placement groups defines the granularity with which the data is spread across all available OSDs. The higher the number, the better the equalization of the capacity load. However, because placement groups must also be handled when data is reconstructed, it is important to choose the number carefully upfront. To support the calculation, a tool is available to produce agile environments. During the lifetime of a storage cluster, a pool may grow beyond the initially anticipated limits. With a growing number of drives, a recalculation is recommended. The number of placement groups per OSD should be around 100. When adding more OSDs to the storage cluster, the number of PGs per OSD lowers over time. Starting with 120 drives in the storage cluster and setting the pg_num of the pool to 4000 results in 100 PGs per OSD, given a replication factor of three. Over time, when growing to ten times the number of OSDs, the number of PGs per OSD goes down to ten only. Because a small number of PGs per OSD tends to result in unevenly distributed capacity, consider adjusting the PGs per pool. Adjusting the number of placement groups can be done online. Recalculating is not only a recalculation of the PG numbers, but also involves data relocation, which is a lengthy process. However, data availability is maintained at all times. Very high numbers of PGs per OSD should be avoided, because reconstruction of all PGs on a failed OSD starts at once. A high number of IOPS is required to perform reconstruction in a timely manner, which might not be available. This would lead to deep I/O queues and high latency, rendering the storage cluster unusable, or result in long healing times. Additional Resources See the PG calculator for calculating the values by a given use case. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Strategies Guide for more information.
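The example above can be reproduced with a simple rule-of-thumb calculation; the figures are the ones already used in this section, and the target of roughly 100 PGs per OSD is the guideline stated above:

PGs per OSD = (pg_num * replica count) / number of OSDs
            = (4000 * 3) / 120
            = 100

Growing to ten times the number of OSDs without changing pg_num gives (4000 * 3) / 1200 = 10 PGs per OSD, which is why a recalculation is recommended as the cluster grows.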
[ "ceph config set osd osd_memory_target_autotune true", "osd_memory_target = TOTAL_RAM_OF_THE_OSD_NODE (in Bytes) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS (in Bytes))", "ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2", "ceph config set osd.123 osd_memory_target 7860684936", "ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES", "ceph config set osd/host:host01 osd_memory_target 1000000000", "ceph orch host label add HOSTNAME _no_autotune_memory", "ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "cephadm shell lsmcli ldl", "cephadm shell ceph config set mgr mgr/cephadm/device_enhanced_scan true", "ceph orch device ls", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch device zap HOSTNAME FILE_PATH --force", "ceph orch device zap host02 /dev/sdb --force", "ceph orch device ls", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch apply osd --all-available-devices", "ceph orch apply osd --all-available-devices --unmanaged=true", "ceph orch ls", "ceph osd tree", "cephadm shell", "ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch daemon add osd HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd host02:/dev/sdb", "ceph orch daemon add osd --method raw HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd --method raw host02:/dev/sdb", "ceph orch ls osd", "ceph osd tree", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=osd", "touch osd_spec.yaml", "service_type: osd service_id: SERVICE_ID placement: host_pattern: '*' # optional data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH osds_per_device: NUMBER_OF_DEVICES # optional db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH encrypted: true", "service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: all: true paths: - /dev/sdb encrypted: true", "service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: size: '80G' db_devices: size: '40G:' paths: - /dev/sdc", "service_type: osd service_id: all-available-devices encrypted: \"true\" method: raw placement: host_pattern: \"*\" data_devices: all: \"true\"", "service_type: osd service_id: osd_spec_hdd placement: host_pattern: '*' data_devices: rotational: 0 db_devices: model: Model-name limit: 2 --- service_type: osd service_id: osd_spec_ssd placement: host_pattern: '*' data_devices: model: Model-name db_devices: vendor: Vendor-name", "service_type: osd service_id: osd_spec_node_one_to_five placement: host_pattern: 'node[1-5]' data_devices: rotational: 1 db_devices: rotational: 0 --- service_type: osd service_id: osd_spec_six_to_ten placement: host_pattern: 'node[6-10]' data_devices: model: Model-name db_devices: model: Model-name", "service_type: osd service_id: osd_using_paths placement: hosts: - host01 - host02 data_devices: paths: - /dev/sdb db_devices: paths: - /dev/sdc wal_devices: paths: - /dev/sdd", "service_type: osd service_id: multiple_osds placement: hosts: - host01 - host02 
osds_per_device: 4 data_devices: paths: - /dev/sdb", "service_type: osd service_id: SERVICE_ID placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_spec placement: hosts: - machine1 data_devices: paths: - /dev/vg_hdd/lv_hdd db_devices: paths: - /dev/vg_nvme/lv_nvme", "service_type: osd service_id: OSD_BY_ID_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_by_id_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 db_devices: paths: - /dev/disk/by-id/nvme-nvme.1b36-31323334-51454d55204e564d65204374726c-00000001", "service_type: osd service_id: OSD_BY_PATH_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH", "service_type: osd service_id: osd_by_path_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-path/pci-0000:0d:00.0-scsi-0:0:0:4 db_devices: paths: - /dev/disk/by-path/pci-0000:00:02.0-nvme-1", "cephadm shell --mount osd_spec.yaml:/var/lib/ceph/osd/osd_spec.yaml", "cd /var/lib/ceph/osd/", "ceph orch apply -i osd_spec.yaml --dry-run", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i osd_spec.yaml", "ceph orch ls osd", "ceph osd tree", "cephadm shell", "ceph osd tree", "ceph orch osd rm OSD_ID [--replace] [--force] --zap", "ceph orch osd rm 0 --zap", "ceph orch osd rm OSD_ID OSD_ID --zap", "ceph orch osd rm 2 5 --zap", "ceph orch osd rm status OSD HOST STATE PGS REPLACE FORCE ZAP DRAIN STARTED AT 9 host01 done, waiting for purge 0 False False True 2023-06-06 17:50:50.525690 10 host03 done, waiting for purge 0 False False True 2023-06-06 17:49:38.731533 11 host02 done, waiting for purge 0 False False True 2023-06-06 17:48:36.641105", "ceph osd tree", "cephadm shell", "ceph osd metadata -f plain | grep device_paths \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdi=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdf=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdg=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdh=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdk=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdl=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdj=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdm=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", [.. 
output omitted ..]", "ceph osd tree", "ceph orch osd rm OSD_ID --replace [--force]", "ceph orch osd rm 0 --replace", "service_type: osd service_id: osd placement: hosts: - myhost data_devices: paths: - /path/to/the/device", "ceph orch osd rm status", "ceph orch pause ceph orch status Backend: cephadm Available: Yes Paused: Yes", "ceph orch device zap node.example.com /dev/sdi --force zap successful for /dev/sdi on node.example.com ceph orch device zap node.example.com /dev/sdf --force zap successful for /dev/sdf on node.example.com", "ceph orch resume", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.77112 root default -3 0.77112 host node 0 hdd 0.09639 osd.0 up 1.00000 1.00000 1 hdd 0.09639 osd.1 up 1.00000 1.00000 2 hdd 0.09639 osd.2 up 1.00000 1.00000 3 hdd 0.09639 osd.3 up 1.00000 1.00000 4 hdd 0.09639 osd.4 up 1.00000 1.00000 5 hdd 0.09639 osd.5 up 1.00000 1.00000 6 hdd 0.09639 osd.6 up 1.00000 1.00000 7 hdd 0.09639 osd.7 up 1.00000 1.00000 [.. output omitted ..]", "ceph osd tree", "ceph osd metadata 0 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\", ceph osd metadata 1 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\",", "cephadm shell", "ceph orch osd rm OSD_ID [--replace]", "ceph orch osd rm 8 --replace Scheduled OSD(s) for removal", "ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.32297 root default -9 0.05177 host host10 3 hdd 0.01520 osd.3 up 1.00000 1.00000 13 hdd 0.02489 osd.13 up 1.00000 1.00000 17 hdd 0.01169 osd.17 up 1.00000 1.00000 -13 0.05177 host host11 2 hdd 0.01520 osd.2 up 1.00000 1.00000 15 hdd 0.02489 osd.15 up 1.00000 1.00000 19 hdd 0.01169 osd.19 up 1.00000 1.00000 -7 0.05835 host host12 20 hdd 0.01459 osd.20 up 1.00000 1.00000 21 hdd 0.01459 osd.21 up 1.00000 1.00000 22 hdd 0.01459 osd.22 up 1.00000 1.00000 23 hdd 0.01459 osd.23 up 1.00000 1.00000 -5 0.03827 host host04 1 hdd 0.01169 osd.1 up 1.00000 1.00000 6 hdd 0.01129 osd.6 up 1.00000 1.00000 7 hdd 0.00749 osd.7 up 1.00000 1.00000 9 hdd 0.00780 osd.9 up 1.00000 1.00000 -3 0.03816 host host05 0 hdd 0.01169 osd.0 up 1.00000 1.00000 8 hdd 0.01129 osd.8 destroyed 0 1.00000 12 hdd 0.00749 osd.12 up 1.00000 1.00000 16 hdd 0.00769 osd.16 up 1.00000 1.00000 -15 0.04237 host host06 5 hdd 0.01239 osd.5 up 1.00000 1.00000 10 hdd 0.01540 osd.10 up 1.00000 1.00000 11 hdd 0.01459 osd.11 up 1.00000 1.00000 -11 0.04227 host host07 4 hdd 0.01239 osd.4 up 1.00000 1.00000 14 hdd 0.01529 osd.14 up 1.00000 1.00000 18 hdd 0.01459 osd.18 up 1.00000 1.00000", "ceph-volume lvm zap --osd-id OSD_ID", "ceph-volume lvm zap --osd-id 8 Zapping: /dev/vg1/data-lv2 Closing encrypted path /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/sbin/cryptsetup remove /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/bin/dd if=/dev/zero of=/dev/vg1/data-lv2 bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.034742 s, 302 MB/s Zapping successful for OSD: 8", "ceph-volume lvm list", "cat osd.yml service_type: osd service_id: osd_service placement: hosts: - host03 data_devices: paths: - /dev/vg1/data-lv2 db_devices: paths: - /dev/vg1/db-lv1", "ceph orch apply -i osd.yml Scheduled osd.osd_service update", "ceph -s ceph osd tree", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 19G 0 part ├─rhel-root 253:0 0 17G 0 lvm / └─rhel-swap 253:1 0 2G 0 lvm [SWAP] sdb 8:16 0 10G 0 disk 
└─ceph--5726d3e9--4fdb--4eda--b56a--3e0df88d663f-osd--block--3ceb89ec--87ef--46b4--99c6--2a56bac09ff0 253:2 0 10G 0 lvm sdc 8:32 0 10G 0 disk └─ceph--d7c9ab50--f5c0--4be0--a8fd--e0313115f65c-osd--block--37c370df--1263--487f--a476--08e28bdbcd3c 253:4 0 10G 0 lvm sdd 8:48 0 10G 0 disk ├─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--31b20150--4cbc--4c2c--9c8f--6f624f3bfd89 253:7 0 2.5G 0 lvm └─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--1bee5101--dbab--4155--a02c--e5a747d38a56 253:9 0 2.5G 0 lvm sde 8:64 0 10G 0 disk sdf 8:80 0 10G 0 disk └─ceph--412ee99b--4303--4199--930a--0d976e1599a2-osd--block--3a99af02--7c73--4236--9879--1fad1fe6203d 253:6 0 10G 0 lvm sdg 8:96 0 10G 0 disk └─ceph--316ca066--aeb6--46e1--8c57--f12f279467b4-osd--block--58475365--51e7--42f2--9681--e0c921947ae6 253:8 0 10G 0 lvm sdh 8:112 0 10G 0 disk ├─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--0dfe6eca--ba58--438a--9510--d96e6814d853 253:3 0 5G 0 lvm └─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--26b70c30--8817--45de--8843--4c0932ad2429 253:5 0 5G 0 lvm sr0", "cephadm shell", "ceph-volume lvm list /dev/sdh ====== osd.2 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 block device /dev/ceph-5726d3e9-4fdb-4eda-b56a-3e0df88d663f/osd-block-3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 block uuid GkWLoo-f0jd-Apj2-Zmwj-ce0h-OY6J-UuW8aD cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 db uuid 6gSPoc-L39h-afN3-rDl6-kozT-AX9S-XR20xM encrypted 0 osd fsid 3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh ====== osd.5 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 block device /dev/ceph-d7c9ab50-f5c0-4be0-a8fd-e0313115f65c/osd-block-37c370df-1263-487f-a476-08e28bdbcd3c block uuid Eay3I7-fcz5-AWvp-kRcI-mJaH-n03V-Zr0wmJ cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 db uuid mwSohP-u72r-DHcT-BPka-piwA-lSwx-w24N0M encrypted 0 osd fsid 37c370df-1263-487f-a476-08e28bdbcd3c osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh", "cat osds.yml service_type: osd service_id: non-colocated unmanaged: true placement: host_pattern: 'ceph*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sdh", "ceph orch apply -i osds.yml Scheduled osd.non-colocated update", "ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 9m ago 4d count:1 crash 3/4 4d ago 4d * grafana ?:3000 1/1 9m ago 4d count:1 mgr 1/2 4d ago 4d count:2 mon 3/5 4d ago 4d count:5 node-exporter ?:9100 3/4 4d ago 4d * osd.non-colocated 8 4d ago 5s <unmanaged> prometheus ?:9095 1/1 9m ago 4d count:1", "ceph orch osd rm 2 5 --zap --replace Scheduled OSD(s) for removal", "ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.1 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 996 KiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.0 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.5", "cat 
osds.yml service_type: osd service_id: non-colocated unmanaged: false placement: host_pattern: 'ceph01*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sde", "ceph orch apply -i osds.yml --dry-run WARNING! Dry-Runs are snapshots of a certain point in time and are bound to the current inventory setup. If any of these conditions change, the preview will be invalid. Please make sure to have a minimal timeframe between planning and applying the specs. #################### SERVICESPEC PREVIEWS #################### +---------+------+--------+-------------+ |SERVICE |NAME |ADD_TO |REMOVE_FROM | +---------+------+--------+-------------+ +---------+------+--------+-------------+ ################ OSDSPEC PREVIEWS ################ +---------+-------+-------+----------+----------+-----+ |SERVICE |NAME |HOST |DATA |DB |WAL | +---------+-------+-------+----------+----------+-----+ |osd |non-colocated |host02 |/dev/sdb |/dev/sde |- | |osd |non-colocated |host02 |/dev/sdc |/dev/sde |- | +---------+-------+-------+----------+----------+-----+", "ceph orch apply -i osds.yml Scheduled osd.non-colocated update", "ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.5 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.5", "ceph-volume lvm list /dev/sde ====== osd.2 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 block device /dev/ceph-a4afcb78-c804-4daf-b78f-3c7ad1ed0379/osd-block-564b3d2f-0f85-4289-899a-9f98a2641979 block uuid ITPVPa-CCQ5-BbFa-FZCn-FeYt-c5N4-ssdU41 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 db uuid HF1bYb-fTK7-0dcB-CHzW-xvNn-dCym-KKdU5e encrypted 0 osd fsid 564b3d2f-0f85-4289-899a-9f98a2641979 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sde ====== osd.5 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd block device /dev/ceph-b37c8310-77f9-4163-964b-f17b4c29c537/osd-block-b42a4f1f-8e19-4416-a874-6ff5d305d97f block uuid 0LuPoz-ao7S-UL2t-BDIs-C9pl-ct8J-xh5ep4 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd db uuid SvmXms-iWkj-MTG7-VnJj-r5Mo-Moiw-MsbqVD encrypted 0 osd fsid b42a4f1f-8e19-4416-a874-6ff5d305d97f osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sde", "cephadm shell", "ceph osd tree", "ceph orch osd rm stop OSD_ID", "ceph orch osd rm stop 0", "ceph orch osd rm status", "ceph osd tree", "cephadm shell", "ceph cephadm osd activate HOSTNAME", "ceph cephadm osd activate host03", "ceph orch ls", "ceph orch ps --service_name= SERVICE_NAME", "ceph orch ps --service_name=osd", "ceph -w" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/management-of-osds-using-the-ceph-orchestrator
Chapter 2. Configuring System Authentication
Chapter 2. Configuring System Authentication Authentication is the process in which a user is identified and verified to a system. It requires presenting some sort of identity and credentials, such as a user name and password. The system then compares the credentials against the configured authentication service. If the credentials match and the user account is active, then the user is authenticated . Once a user is authenticated, the information is passed to the access control service to determine what the user is permitted to do. Those are the resources the user is authorized to access. Note that authentication and authorization are two separate processes. The system must have a configured list of valid account databases for it to check for user authentication. The information to verify the user can be located on the local system or the local system can reference a user database on a remote system, such as LDAP or Kerberos. A local system can use a variety of different data stores for user information, including Lightweight Directory Access Protocol (LDAP), Network Information Service (NIS), and Winbind. Both LDAP and NIS data stores can use Kerberos to authenticate users. For convenience and potentially part of single sign-on, Red Hat Enterprise Linux can use the System Security Services Daemon (SSSD) as a central daemon to authenticate the user to different identity back ends or even to ask for a ticket-granting ticket (TGT) for the user. SSSD can interact with LDAP, Kerberos, and external applications to verify user credentials. This chapter explains what tools are available in Red Hat Enterprise Linux for configuring system authentication: the ipa-client-install utility and the realmd system for Identity Management systems; see Section 2.1, "Identity Management Tools for System Authentication" for more information the authconfig utility and the authconfig UI for other systems; see Section 2.2, "Using authconfig " for more information 2.1. Identity Management Tools for System Authentication You can use the ipa-client-install utility and the realmd system to automatically configure system authentication on Identity Management machines. ipa-client-install The ipa-client-install utility configures a system to join the Identity Management domain as a client machine. For more information about ipa-client-install , see the Installing a Client in the Linux Domain Identity, Authentication, and Policy Guide . Note that for Identity Management systems, ipa-client-install is preferred over realmd . realmd The realmd system joins a machine to an identity domain, such as an Identity Management or Active Directory domain. For more information about realmd , see the Using realmd to Connect to an Active Directory Domain section in the Windows Integration Guide .
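As a brief illustration of the two approaches described above, the following commands show how a client is typically joined to a domain; the domain name idm.example.com is a placeholder, and the exact options depend on your environment:

# Identity Management clients (preferred for IdM domains):
ipa-client-install --domain=idm.example.com --mkhomedir

# Generic domain join with realmd (IdM or Active Directory):
realm discover idm.example.com
realm join idm.example.com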
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring_authentication
Chapter 5. Deploying a Red Hat Enterprise Linux image as a Google Compute Engine instance on Google Cloud Platform
Chapter 5. Deploying a Red Hat Enterprise Linux image as a Google Compute Engine instance on Google Cloud Platform You have a number of options for deploying a Red Hat Enterprise Linux (RHEL) 7 image as a Google Compute Engine (GCE) instance on Google Cloud Platform (GCP). This chapter discusses your options for choosing an image and lists or refers to system requirements for your host system and VM. The chapter provides procedures for creating a custom VM from an ISO image, uploading to GCE, and launching an instance. This chapter refers to the Google documentation in a number of places. For many procedures, see the referenced Google documentation for additional detail. Note For a list of Red Hat product certifications for GCP, see Red Hat on Google Cloud Platform . Prerequisites You need a Red Hat Customer Portal account to complete the procedures in this chapter. Create an account with GCP to access the Google Cloud Platform Console. See Google Cloud for more information. Enable your Red Hat subscriptions through the Red Hat Cloud Access program . The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems onto GCP with full support from Red Hat. Additional resources Red Hat in the Public Cloud Google Cloud 5.1. Red Hat Enterprise Linux image options on GCP The following table lists image choices and the differences in the image options. Table 5.1. Image options Image option Subscriptions Sample scenario Considerations Choose to deploy a custom image that you move to GCP. Leverage your existing Red Hat subscriptions. Enable subscriptions through the Red Hat Cloud Access program , upload your custom image, and attach your subscriptions. The subscription includes the Red Hat product cost; you pay all other instance costs. Custom images that you move to GCP are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images. Choose to deploy an existing GCP image that includes RHEL. The GCP images include a Red Hat product. Choose a RHEL image when you launch an instance on the GCP Compute Engine , or choose an image from the Google Cloud Platform Marketplace . You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. Important You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access bring-your-own subscription (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing. The remainder of this chapter includes information and procedures pertaining to custom images. Additional resources Red Hat in the Public Cloud Images Red Hat Cloud Access Reference Guide Creating an instance from a custom image 5.2. Understanding base images This section includes information on using preconfigured base images and their configuration settings. 5.2.1. Using a custom base image To manually configure a VM, you start with a base (starter) VM image. Once you have created the base VM image, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image. Additional resources Red Hat Enterprise Linux 5.2.2. 
Virtual machine configuration settings Cloud VMs must have the following configuration settings. Table 5.2. VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your VMs. dhcp The primary virtual adapter should be configured for dhcp. 5.3. Creating a base VM from an ISO image Follow the procedures in this section to create a base image from an ISO image. Prerequisites Enable virtualization for your Red Hat Enterprise Linux 7 host machine by following the Virtualization Deployment and Administration Guide . 5.3.1. Downloading the ISO image Procedure Download the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal . Move the image to the /var/lib/libvirt/images directory. 5.3.2. Creating a VM from an ISO image Procedure Ensure that you have enabled your host machine for virtualization. For information and procedures to install virtualization packages, see Installing virtualization packages on an existing Red Hat Enterprise Linux system . Create and start a basic Red Hat Enterprise Linux VM. For instructions to create a VM, refer to Creating a virtual machine . If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio . A basic command line sample follows. If you use the virt-manager application to create your VM, follow the procedure in Creating guests with virt-manager , with these caveats: Do not check Immediately Start VM . Change your Memory and Storage Size to your preferred settings. Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM. 5.3.3. Completing the RHEL installation Perform the following steps to complete the installation and to enable root access once the VM launches. Procedure Choose the language you want to use during the installation process. On the Installation Summary view: Click Software Selection and check Minimal Install . Click Done . Click Installation Destination and check Custom under Storage Configuration . Verify at least 500 MB for /boot . You can use the remaining space for root / . Standard partitions are recommended, but you can use Logical Volume Management (LVM). You can use xfs, ext4, or ext3 for the file system. Click Done when you are finished with changes. Click Begin Installation . Set a Root Password . Reboot the VM and log in as root once the installation completes. Configure the image. Note Ensure that the cloud-init package is installed and enabled. Power down the VM. 5.4. Uploading the RHEL image to GCP Follow the procedures in this section on your host machine to upload your image to GCP. 5.4.1. Creating a new project on GCP Complete the following steps to create a new project on GCP. Prerequisites You must have created an account with GCP. If you have not, see Google Cloud for more information. Procedure Launch the GCP Console . Click the drop-down menu to the right of Google Cloud Platform . From the pop-up menu, click NEW PROJECT . From the New Project window, enter a name for your new project. Check the Organization . Click the drop-down menu to change the organization, if necessary. Confirm the Location of your parent organization or folder. Click Browse to search for and change this value, if necessary. Click CREATE to create your new GCP project.
Note Once you have installed the Google Cloud SDK, you can use the gcloud projects create CLI command to create a project. A simple example follows. The example creates a project with the project ID my-gcp-project3 and the project name project3 . See gcloud project create for more information. Additional resources Creating and Managing Resources 5.4.2. Installing the Google Cloud SDK Complete the following steps to install the Google Cloud SDK. Prerequisites Create a project on the GCP if you have not already done so. See Creating a new project on the Google Cloud Platform for more information. Ensure that your host system includes Python 2.7 or later. If it does not, install Python 2.7. Procedure Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details. Follow the same instructions for initializing the Google Cloud SDK. Note Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command. Additional resources Quickstart for Linux gcloud command reference gcloud command-line tool overview 5.4.3. Creating SSH keys for Google Compute Engine Perform the following procedure to generate and register SSH keys with GCE so that you can SSH directly into an instance using its public IP address. Procedure Use the ssh-keygen command to generate an SSH key pair for use with GCE. From the GCP Console Dashboard page , click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Metadata . Click SSH Keys and then click Edit . Enter the output generated from the ~/.ssh/google_compute_engine.pub file and click Save . You can now connect to your instance using standard SSH. Note You can run the gcloud compute config-ssh command to populate your config file with aliases for your instances. The aliases allow simple SSH connections by instance name. For information on the gcloud compute config-ssh command, see gcloud compute config-ssh . Additional resources gcloud compute config-ssh Connecting to instances 5.4.4. Creating a storage bucket in GCP Storage Importing to GCP requires a GCP Storage Bucket. Complete the following steps to create a bucket. Procedure If you are not already logged in to GCP, log in with the following command. Create a storage bucket. Note Alternatively, you can use the Google Cloud Console to create a bucket. See Create a bucket for information. Additional resources Create a bucket 5.4.5. Converting and uploading your image to your GCP Bucket Complete the following procedure to convert and upload your image to your GCP Bucket. The samples are representative; they convert a qcow2 image to raw format and then tar that image for upload. Procedure Run the qemu-img command to convert your image. The converted image must have the name disk.raw . Tar the image. Upload the image to the bucket you created previously. Upload could take a few minutes. Verification steps From the Google Cloud Platform home screen, click the collapsed menu icon and select Storage and then select Browser . Click the name of your bucket. The tarred image is listed under your bucket name. Note You can also upload your image using the GCP Console . To do so, click the name of your bucket and then click Upload files . 
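The conversion and upload sequence described in this section can be summarized as follows; bucket_name and the qcow2 file name are placeholders:

qemu-img convert -f qcow2 -O raw rhel-sample.qcow2 disk.raw
tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw
gsutil cp disk.raw.tar.gz gs://bucket_name

The converted image must be named disk.raw before it is tarred and copied to the bucket.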
Additional resources Manually importing virtual disks Choosing an import method 5.4.6. Creating an image from the object in the GCP bucket Perform the following procedure to create an image from the object in your GCP bucket. Procedure Run the following command to create an image for GCE. Specify the name of the image you are creating, the bucket name, and the name of the tarred image. Note Alternatively, you can use the Google Cloud Console to create an image. See Creating, deleting, and deprecating custom images for information. Optionally, find the image in the GCP Console. Click the Navigation menu to the left of the Google Cloud Console banner. Select Compute Engine and then Images . Additional resources Creating, deleting, and deprecating custom images gcloud compute images create 5.4.7. Creating a Google Compute Engine instance from an image Complete the following steps to configure a GCE VM instance using the GCP Console. Note The following procedure provides instructions for creating a basic VM instance using the GCP Console. See Creating and starting a VM instance for more information on GCE VM instances and their configuration options. Procedure From the GCP Console Dashboard page , click the Navigation menu to the left of the Google Cloud Console banner , select Compute Engine , and then select Images . Select your image. Click Create Instance . On the Create an instance page, enter a Name for your instance. Choose a Region and Zone . Choose a Machine configuration that meets or exceeds the requirements of your workload. Ensure that Boot disk specifies the name of your image. Optionally, under Firewall , select Allow HTTP traffic or Allow HTTPS traffic . Click Create . Note These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements. Find your image under VM instances . From the GCP Console Dashboard, click the Navigation menu to the left of the Google Cloud Console banner , select Compute Engine , and then select VM instances . Note Alternatively, you can use the gcloud compute instances create CLI command to create a GCE VM instance from an image. A simple example follows. The example creates a VM instance named myinstance3 in zone us-central1-a based upon the existing image test-iso2-image . See gcloud compute instances create for more information. 5.4.8. Connecting to your instance Perform the following procedure to connect to your GCE instance using its public IP address. Procedure Run the following command to ensure that your instance is running. The command lists information about your GCE instance, including whether the instance is running, and, if so, the public IP address of the running instance. Connect to your instance using standard SSH. The example uses the google_compute_engine key created earlier. Note GCP offers a number of ways to SSH into your instance. See Connecting to instances for more information. Additional resources gcloud compute instances list Connecting to instances 5.4.9. Attaching Red Hat subscriptions Complete the following steps to attach the subscriptions you previously enabled through the Red Hat Cloud Access program. Prerequisites You must have enabled your subscriptions. Procedure Register your system. Attach your subscriptions. You can use an activation key to attach subscriptions. Refer to Creating Red Hat Customer Portal Activation Keys . Alternatively, you can manually attach a subscription using the ID of the subscription pool (Pool ID). 
Refer to Attaching and Removing Subscriptions Through the Command Line . Additional resources Creating Red Hat Customer Portal Activation Keys Attaching and Removing Subscriptions Through the Command Line Using and Configuring Red Hat Subscription Manager
[ "virt-install --name _vmname_ --memory 2048 --vcpus 2 --disk size=8,bus=virtio --location rhel-7.0-x86_64-dvd.iso --os-variant=rhel7.0", "gcloud projects create my-gcp-project3 --name project3", "ssh-keygen -t rsa -f ~/.ssh/google_compute_engine", "ssh -i ~/.ssh/google_compute_engine <username>@<instance_external_ip>", "gcloud auth login", "gsutil mb gs://bucket_name", "qemu-img convert -f qcow2 -O raw rhel-sample.qcow2 disk.raw", "tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw", "gsutil cp disk.raw.tar.gz gs://bucket_name", "gcloud compute images create my-image-name --source-uri gs://my-bucket-name/disk.raw.tar.gz", "gcloud compute instances create myinstance3 --zone=us-central1-a --image test-iso2-image", "gcloud compute instances list", "ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>", "subscription-manager register --auto-attach" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/assembly_deploying-a-rhel-image-on-gcp_cloud-content
Chapter 5. Configuring PCI passthrough
Chapter 5. Configuring PCI passthrough You can use PCI passthrough to attach a physical PCI device, such as a graphics card or a network device, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host. Important Using PCI passthrough with routed provider networks The Compute service does not support single networks that span multiple provider networks. When a network contains multiple physical networks, the Compute service only uses the first physical network. Therefore, if you are using routed provider networks you must use the same physical_network name across all the Compute nodes. If you use routed provider networks with VLAN or flat networks, you must use the same physical_network name for all segments. You then create multiple segments for the network and map the segments to the appropriate subnets. To enable your cloud users to create instances with PCI devices attached, you must complete the following: Designate Compute nodes for PCI passthrough. Configure the Compute nodes for PCI passthrough that have the required PCI devices. Deploy the overcloud. Create a flavor for launching instances with PCI devices attached. Prerequisites The Compute nodes have the required PCI devices. 5.1. Designating Compute nodes for PCI passthrough To designate Compute nodes for instances with physical PCI devices attached, you must create a new role file to configure the PCI passthrough role, and configure a new overcloud flavor and PCI passthrough resource class to use to tag the Compute nodes for PCI passthrough. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_pci_passthrough.yaml that includes the Controller , Compute , and ComputePCI roles: Open roles_data_pci_passthrough.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputePCI Role name name: Compute name: ComputePCI description Basic Compute Node role PCI Passthrough Compute Node role HostnameFormatDefault %stackname%-novacompute-%index% %stackname%-novacomputepci-%index% deprecated_nic_config_name compute.yaml compute-pci-passthrough.yaml Register the PCI passthrough Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Create the compute-pci-passthrough overcloud flavor for PCI passthrough Compute nodes: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Tag each bare metal node that you want to designate for PCI passthrough with a custom PCI passthrough resource class: Replace <node> with the ID of the bare metal node. 
Associate the compute-pci-passthrough flavor with the custom PCI passthrough resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances: Add the following parameters to the node-info.yaml file to specify the number of PCI passthrough Compute nodes, and the flavor to use for the PCI passthrough designated Compute nodes: To verify that the role was created, enter the following command: 5.2. Configuring a PCI passthrough Compute node To enable your cloud users to create instances with PCI devices attached, you must configure both the Compute nodes that have the PCI devices and the Controller nodes. Procedure Create an environment file to configure the Controller node on the overcloud for PCI passthrough, for example, pci_passthrough_controller.yaml . Add PciPassthroughFilter to the NovaSchedulerDefaultFilters parameter in pci_passthrough_controller.yaml : To specify the PCI alias for the devices on the Controller node, add the following configuration to pci_passthrough_controller.yaml : For more information about configuring the device_type field, see PCI passthrough device type field . Note If the nova-api service is running in a role different from the Controller role, replace ControllerExtraConfig with the user role in the format <Role>ExtraConfig . Optional: To set a default NUMA affinity policy for PCI passthrough devices, add numa_policy to the nova::pci::aliases: configuration from step 3: To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthrough_compute.yaml . To specify the available PCI devices on the Compute node, use the vendor_id and product_id options to add all matching PCI devices to the pool of PCI devices available for passthrough to instances. For example, to add all Intel(R) Ethernet Controller X710 devices to the pool of PCI devices available for passthrough to instances, add the following configuration to pci_passthrough_compute.yaml : For more information about how to configure NovaPCIPassthrough , see Guidelines for configuring NovaPCIPassthrough . You must create a copy of the PCI alias on the Compute node for instance migration and resize operations. To specify the PCI alias for the devices on the PCI passthrough Compute node, add the following to pci_passthrough_compute.yaml : Note The Compute node aliases must be identical to the aliases on the Controller node. Therefore, if you added numa_policy to nova::pci::aliases in pci_passthrough_controller.yaml , then you must also add it to nova::pci::aliases in pci_passthrough_compute.yaml . To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthrough_compute.yaml . For example, use the following KernelArgs settings to enable an Intel IOMMU: To enable an AMD IOMMU, set KernelArgs to "amd_iommu=on iommu=pt" . Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment.
For more information, see Configuring manual node reboot to define KernelArgs . Add your custom environment files to the stack with your other environment files and deploy the overcloud: Create and configure the flavors that your cloud users can use to request the PCI devices. The following example requests two devices, each with a vendor ID of 8086 and a product ID of 1572 , using the alias defined in step 7: Optional: To override the default NUMA affinity policy for PCI passthrough devices, you can add the NUMA affinity policy property key to the flavor or the image: To override the default NUMA affinity policy by using the flavor, add the hw:pci_numa_affinity_policy property key: For more information about the valid values for hw:pci_numa_affinity_policy , see Flavor metadata . To override the default NUMA affinity policy by using the image, add the hw_pci_numa_affinity_policy property key: Note If you set the NUMA affinity policy on both the image and the flavor then the property values must match. The flavor setting takes precedence over the image and default settings. Therefore, the configuration of the NUMA affinity policy on the image only takes effect if the property is not set on the flavor. Verification Create an instance with a PCI passthrough device: Log in to the instance as a cloud user. For more information, see Connecting to an instance . To verify that the PCI device is accessible from the instance, enter the following command from the instance: 5.3. PCI passthrough device type field The Compute service categorizes PCI devices into one of three types, depending on the capabilities the devices report. The following lists the valid values that you can set the device_type field to: type-PF The device supports SR-IOV and is the parent or root device. Specify this device type to passthrough a device that supports SR-IOV in its entirety. type-VF The device is a child device of a device that supports SR-IOV. type-PCI The device does not support SR-IOV. This is the default device type if the device_type field is not set. Note You must configure the Compute and Controller nodes with the same device_type . 5.4. Guidelines for configuring NovaPCIPassthrough Do not use the devname parameter when configuring PCI passthrough, as the device name of a NIC can change. Instead, use vendor_id and product_id because they are more stable, or use the address of the NIC. To pass through a specific Physical Function (PF), you can use the address parameter because the PCI address is unique to each device. Alternatively, you can use the product_id parameter to pass through a PF, but you must also specify the address of the PF if you have multiple PFs of the same type. To pass through all the Virtual Functions (VFs) specify only the product_id and vendor_id of the VFs that you want to use for PCI passthrough. You must also specify the address of the VF if you are using SRIOV for NIC partitioning and you are running OVS on a VF. To pass through only the VFs for a PF but not the PF itself, you can use the address parameter to specify the PCI address of the PF and product_id to specify the product ID of the VF. Configuring the address parameter The address parameter specifies the PCI address of the device. You can set the value of the address parameter using either a String or a dict mapping. 
String format If you specify the address using a string, you can include wildcards (*), as shown in the following example: Dictionary format If you specify the address using the dictionary format, you can include regular expression syntax, as shown in the following example: Note The Compute service restricts the configuration of address fields to the following maximum values: domain - 0xFFFF bus - 0xFF slot - 0x1F function - 0x7 The Compute service supports PCI devices with a 16-bit address domain. The Compute service ignores PCI devices with a 32-bit address domain.
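The vendor_id and product_id values used in NovaPCIPassthrough and in the PCI aliases can be read directly from the Compute node. The following is a minimal sketch, assuming the lspci utility is available on the node and that an Intel X710 NIC is present; the slot address and device description shown in the comment are illustrative only:
# List Ethernet devices with their numeric vendor and product IDs
lspci -nn | grep -i ethernet
# Illustrative output line:
# 3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572]
In this output, the bracketed pair [8086:1572] maps to vendor_id and product_id , and the leading 3b:00.0 value is the PCI address that you can use with the address parameter.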
[ "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_pci_passthrough.yaml Compute:ComputePCI Compute Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> compute-pci-passthrough", "(undercloud)USD openstack baremetal node set --resource-class baremetal.PCI-PASSTHROUGH <node>", "(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_PCI_PASSTHROUGH=1 compute-pci-passthrough", "(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-pci-passthrough", "parameter_defaults: OvercloudComputePCIFlavor: compute-pci-passthrough ComputePCICount: 3", "(undercloud)USD openstack overcloud profiles list", "parameter_defaults: NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']", "parameter_defaults: ControllerExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\"", "parameter_defaults: ControllerExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\" numa_policy: \"preferred\"", "parameter_defaults: ComputePCIParameters: NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1572\"", "parameter_defaults: ComputePCIExtraConfig: nova::pci::aliases: - name: \"a1\" product_id: \"1572\" vendor_id: \"8086\" device_type: \"type-PF\"", "parameter_defaults: ComputePCIParameters: KernelArgs: \"intel_iommu=on iommu=pt\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/pci_passthrough_controller.yaml -e /home/stack/templates/pci_passthrough_compute.yaml \\", "(overcloud)# openstack flavor set --property \"pci_passthrough:alias\"=\"a1:2\" device_passthrough", "(overcloud)# openstack flavor set --property \"hw:pci_numa_affinity_policy\"=\"required\" device_passthrough", "(overcloud)# openstack image set --property hw_pci_numa_affinity_policy=required device_passthrough_image", "openstack server create --flavor device_passthrough --image <image> --wait test-pci", "lspci -nn | grep <device_name>", "NovaPCIPassthrough: - address: \"*:0a:00.*\" physical_network: physnet1", "NovaPCIPassthrough: - address: domain: \".*\" bus: \"02\" slot: \"01\" function: \"[0-2]\" physical_network: net1" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-pci-passthrough_pci-passthrough
Chapter 5. Setting Up Storage Volumes
Chapter 5. Setting Up Storage Volumes A Red Hat Gluster Storage volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool. Most of the Red Hat Gluster Storage Server management operations are performed on the volume. For detailed information about configuring Red Hat Gluster Storage to enhance performance, see Chapter 19, Tuning for Performance . Warning Red Hat does not support writing data directly into the bricks. Read and write data only through the Native Client, or through NFS or SMB mounts. Note Red Hat Gluster Storage supports IP over Infiniband (IPoIB). Install Infiniband packages on all Red Hat Gluster Storage servers and clients to support this feature. Run the yum groupinstall "Infiniband Support" command to install the Infiniband packages. Volume Types Distributed Distributes files across bricks in the volume. Use this volume type where scaling and redundancy requirements are not important, or are provided by other hardware or software layers. See Section 5.4, "Creating Distributed Volumes" for additional information about this volume type. Replicated Replicates files across bricks in the volume. Use this volume type in environments where high availability and high reliability are critical. See Section 5.5, "Creating Replicated Volumes" for additional information about this volume type. Distributed Replicated Distributes files across replicated bricks in the volume. Use this volume type in environments where high reliability and scalability are critical. This volume type offers improved read performance in most environments. See Section 5.6, "Creating Distributed Replicated Volumes" for additional information about this volume type. Arbitrated Replicated Replicates files across two bricks in a replica set, and replicates only metadata to the third brick. Use this volume type in environments where consistency is critical, but underlying storage space is at a premium. See Section 5.7, "Creating Arbitrated Replicated Volumes" for additional information about this volume type. Dispersed Disperses the file's data across the bricks in the volume. Use this volume type where you need a configurable level of reliability with minimal space waste. See Section 5.8, "Creating Dispersed Volumes" for additional information about this volume type. Distributed Dispersed Distributes the file's data across dispersed sub-volumes. Use this volume type where you need a configurable level of reliability with minimal space waste. See Section 5.9, "Creating Distributed Dispersed Volumes" for additional information about this volume type. 5.1. Setting up Gluster Storage Volumes using gdeploy The gdeploy tool automates the process of creating, formatting, and mounting bricks. With gdeploy, the manual steps listed between Section 5.4 Formatting and Mounting Bricks and Section 5.10 Creating Distributed Dispersed Volumes are automated. When setting up a new trusted storage pool, gdeploy is often the preferred method, because manually executing numerous commands can be error prone. The advantages of using gdeploy to automate brick creation are as follows: The backend can be set up on several machines from a single laptop or desktop. This saves time and scales up well as the number of nodes in the trusted storage pool increases. Flexibility in choosing the drives to configure (sd, vd, and so on). Flexibility in naming the logical volumes (LV) and volume groups (VG). 5.1.1.
Getting Started Prerequisites Generate the passphrase-less SSH keys for the nodes that are going to be part of the trusted storage pool by running the following command: Set up key-based SSH authentication between the gdeploy controller and servers by running the following command: Note If you are using a Red Hat Gluster Storage node as the deployment node and not an external node, then the key-based SSH authentication must be set up for the Red Hat Gluster Storage node from where the installation is performed. Enable the repository required to install Ansible by running the following command: For Red Hat Enterprise Linux 8 For Red Hat Enterprise Linux 7 Install Ansible by executing the following command: You must also ensure the following: Devices should be raw and unused. The default system locale must be set to en_US . For information on the system locale, refer to the Setting the System Locale section of the Red Hat Enterprise Linux 7 System Administrator's Guide . For multiple devices, use multiple volume groups, thinpool, and thinvol in the gdeploy configuration file. For more information, see Installing Ansible to Support Gdeploy in the Red Hat Gluster Storage 3.5 Installation Guide . gdeploy can be used to deploy Red Hat Gluster Storage in two ways: Using a node in a trusted storage pool Using a machine outside the trusted storage pool Using a node in a cluster The gdeploy package is bundled as part of the initial installation of Red Hat Gluster Storage. Using a machine outside the trusted storage pool You must ensure that the Red Hat Gluster Storage is subscribed to the required channels. For more information, see Subscribing to the Red Hat Gluster Storage Server Channels in the Red Hat Gluster Storage 3.5 Installation Guide . Execute the following command to install gdeploy: For more information on installing gdeploy, see the Installing Ansible to Support Gdeploy section in the Red Hat Gluster Storage 3.5 Installation Guide . 5.1.2. Setting up a Trusted Storage Pool Creating a trusted storage pool is a tedious task and becomes more tedious as the number of nodes in the trusted storage pool grows. With gdeploy, a single configuration file can be used to set up a trusted storage pool. When gdeploy is installed, a sample configuration file will be created at: Note The trusted storage pool can be created either by performing each task independently, such as setting up a backend, creating a volume, and mounting volumes, or by combining the tasks in a single configuration. For example, for a basic trusted storage pool with a 3 x 3 replicated volume, the configuration details in the configuration file will be as follows: 3x3-volume-create.conf : With this configuration, a 3 x 3 replicated trusted storage pool with the given IP addresses, the backend devices /dev/sdb , /dev/sdc , and /dev/sdd , and the volume name sample_volname is created. For more information on possible values, see Section 5.1.7, "Configuration File" After modifying the configuration file, invoke the configuration using the command: Note You can create a new configuration file by referencing the template file available at /usr/share/doc/gdeploy/examples/gluster.conf.sample . To invoke the new configuration file, run the gdeploy -c /path_to_file/config.txt command. To only set up the backend, see Section 5.1.3, "Setting up the Backend" To only create a volume, see Section 5.1.4, "Creating Volumes" To only mount clients, see Section 5.1.5, "Mounting Clients" 5.1.3.
Setting up the Backend In order to set up a Gluster Storage volume, the LVM thin pools must be set up on the storage disks. If the number of machines in the trusted storage pool is large, these tasks take a long time, because the number of commands involved is large and the process is error prone if you are not cautious. With gdeploy, just a configuration file can be used to set up a backend. The backend is set up at the time of setting up a fresh trusted storage pool, which requires bricks to be set up before creating a volume. When gdeploy is installed, a sample configuration file will be created at: A backend can be set up in two ways: Using the [backend-setup] module Creating Physical Volume (PV), Volume Group (VG), and Logical Volume (LV) individually Note For Red Hat Enterprise Linux 6, the xfsprogs package must be installed before setting up the backend bricks using gdeploy. Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the Red Hat Gluster Storage Software Components and Versions section of the Installation Guide . 5.1.3.1. Using the [backend-setup] Module Backend setup can be done on specific machines or on all the machines. The backend-setup module internally creates the PV, VG, and LV and mounts the device. Thinly provisioned logical volumes are created according to the performance recommendations from Red Hat. The backend can be set up based on the requirement, as follows: Generic Specific Generic If the disk names are uniform across the machines, then the backend setup can be written as shown below. The backend is set up for all the hosts in the `hosts' section. For more information on possible values, see Section 5.1.7, "Configuration File" Example configuration file: Backend-setup-generic.conf Specific If the disk names vary across the machines in the cluster, then the backend setup can be written for specific machines with specific disk names. gdeploy is quite flexible, allowing you to do host-specific setup in a single configuration file. For more information on possible values, see Section 5.1.7, "Configuration File" Example configuration file: backend-setup-hostwise.conf 5.1.3.2. Creating Backend by Setting up PV, VG, and LV If the user needs more control over setting up the backend, then the pv, vg, and lv can be created individually. The LV module provides the flexibility to create more than one LV on a VG. For example, the `backend-setup' module sets up a thin pool by default and applies the default performance recommendations. However, if the user has a different use case that demands more than one LV and a combination of thin and thick pools, then `backend-setup' is of no help. The user can use the PV, VG, and LV modules to achieve this. For more information on possible values, see Section 5.1.7, "Configuration File" The example below shows how to create four logical volumes on a single volume group. The example shows a mix of thin and thick pool LV creation. Example to extend an existing VG: 5.1.4. Creating Volumes Setting up a volume involves writing long commands by choosing the hostname/IP and brick order carefully, and this can be error prone. gdeploy helps in simplifying this task.
When gdeploy is installed, a sample configuration file will be created at: For example, for a basic trusted storage pool with a 4 x 3 replicated volume, the configuration details in the configuration file will be as follows: For more information on possible values, see Section 5.1.7, "Configuration File" After modifying the configuration file, invoke the configuration using the command: Creating Multiple Volumes Note Creating multiple volumes is supported only from gdeploy 2.0 onwards; check your gdeploy version before trying this configuration. While creating multiple volumes in a single configuration, the [volume] modules should be numbered. For example, if there are two volumes, they will be numbered [volume1] and [volume2]. vol-create.conf With gdeploy 2.0, a volume can be created with multiple volume options set. The number of keys should match the number of values. The above configuration will create two volumes with multiple volume options set. 5.1.5. Mounting Clients When mounting clients, instead of logging into every client that has to be mounted, gdeploy can be used to mount clients remotely. When gdeploy is installed, a sample configuration file will be created at: The following is an example of the modifications to the configuration file in order to mount clients: Note If the file system type ( fstype ) is NFS, then mention it as nfs-version . The default version is 3 . For more information on possible values, see Section 5.1.7, "Configuration File" After modifying the configuration file, invoke the configuration using the command: 5.1.6. Configuring a Volume The volumes can be configured using the configuration file. The volumes can be configured remotely using the configuration file without having to log into the trusted storage pool. For more information regarding the sections and options in the configuration file, see Section 5.1.7, "Configuration File" 5.1.6.1. Adding and Removing a Brick The configuration file can be modified to add or remove a brick: Adding a Brick Modify the [volume] section in the configuration file to add a brick. For example: After modifying the configuration file, invoke the configuration using the command: Removing a Brick Modify the [volume] section in the configuration file to remove a brick. For example: Other options for state are stop, start, and force. After modifying the configuration file, invoke the configuration using the command: For more information on possible values, see Section 5.1.7, "Configuration File" 5.1.6.2. Rebalancing a Volume Modify the [volume] section in the configuration file to rebalance a volume. For example: Other options for state are stop, and fix-layout. After modifying the configuration file, invoke the configuration using the command: For more information on possible values, see Section 5.1.7, "Configuration File" 5.1.6.3. Starting, Stopping, or Deleting a Volume The configuration file can be modified to start, stop, or delete a volume: Starting a Volume Modify the [volume] section in the configuration file to start a volume. For example: After modifying the configuration file, invoke the configuration using the command: Stopping a Volume Modify the [volume] section in the configuration file to stop a volume. For example: After modifying the configuration file, invoke the configuration using the command: Deleting a Volume Modify the [volume] section in the configuration file to delete a volume.
For example: After modifying the configuration file, invoke the configuration using the command: For more information on possible values, see Section 5.1.7, "Configuration File" 5.1.7. Configuration File The configuration file includes the various options that can be used to change the settings for gdeploy. The following options are currently supported: [hosts] [devices] [disktype] [diskcount] [stripesize] [vgs] [pools] [lvs] [mountpoints] [peer] [clients] [volume] [backend-setup] [pv] [vg] [lv] [RH-subscription] [yum] [shell] [update-file] [service] [script] [firewalld] [geo-replication] The options are briefly explained in the following list: hosts This is a mandatory section which contains the IP address or hostname of the machines in the trusted storage pool. Each hostname or IP address should be listed in a separate line. For example: devices This is a generic section and is applicable to all the hosts listed in the [hosts] section. However, if sections of hosts such as the [hostname] or [IP-address] is present, then the data in the generic sections like [devices] is ignored. Host specific data take precedence. This is an optional section. For example: Note When configuring the backend setup, the devices should be either listed in this section or in the host specific section. disktype This section specifies the disk configuration that is used while setting up the backend. gdeploy supports RAID 10, RAID 6, RAID 5, and JBOD configurations. This is an optional section and if the field is left empty, JBOD is taken as the default configuration. Valid values for this field are raid10 , raid6 , raid5 , and jbod . For example: diskcount This section specifies the number of data disks in the setup. This is a mandatory field if a RAID disk type is specified under [disktype] . If the [disktype] is JBOD the [diskcount] value is ignored. This parameter is host specific. For example: stripesize This section specifies the stripe_unit size in KB. Case 1: This field is not necessary if the [disktype] is JBOD, and any given value will be ignored. Case 2: This is a mandatory field if [disktype] is specified as RAID 5 or RAID 6. For [disktype] RAID 10, the default value is taken as 256KB. Red Hat does not recommend changing this value. If you specify any other value the following warning is displayed: Note Do not add any suffixes like K, KB, M, etc. This parameter is host specific and can be added in the hosts section. For example: vgs This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the volume group names for the devices listed in [devices]. The number of volume groups in the [vgs] section should match the one in [devices]. If the volume group names are missing, the volume groups will be named as GLUSTER_vg{1, 2, 3, ...} as default. For example: pools This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the pool names for the volume groups specified in the [vgs] section. The number of pools listed in the [pools] section should match the number of volume groups in the [vgs] section. If the pool names are missing, the pools will be named as GLUSTER_pool{1, 2, 3, ...}. For example: lvs This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section provides the logical volume names for the volume groups specified in [vgs]. 
The number of logical volumes listed in the [lvs] section should match the number of volume groups listed in [vgs]. If the logical volume names are missing, it is named as GLUSTER_lv{1, 2, 3, ...}. For example: mountpoints This section is deprecated in gdeploy 2.0. Please see [backend-setup] for more details for gdeploy 2.0. This section specifies the brick mount points for the logical volumes. The number of mount points should match the number of logical volumes specified in [lvs] If the mount points are missing, the mount points will be names as /gluster/brick{1, 2, 3...}. For example: peer This section specifies the configurations for the Trusted Storage Pool management (TSP). This section helps in making all the hosts specified in the [hosts] section to either probe each other to create the trusted storage pool or detach all of them from the trusted storage pool. The only option in this section is the option names 'action' which can have it's values to be either probe or detach. For example: clients This section specifies the client hosts and client_mount_points to mount the gluster storage volume created. The 'action' option is to be specified for the framework to determine the action that has to be performed. The options are 'mount' and 'unmount'. The Client hosts field is mandatory. If the mount points are not specified, default will be taken as /mnt/gluster for all the hosts. The option fstype specifies how the gluster volume is to be mounted. Default is glusterfs (FUSE mount). The volume can also be mounted as NFS. Each client can have different types of volume mount, which has to be specified with a comma separated. The following fields are included: For example: volume The section specifies the configuration options for the volume. The following fields are included in this section: action This option specifies what action must be performed in the volume. The choices can be [create, delete, add-brick, remove-brick]. create : This choice is used to create a volume. delete : If the delete choice is used, all the options other than 'volname' will be ignored. add-brick or remove-brick : If the add-brick or remove-brick is chosen, extra option bricks with a comma separated list of brick names(in the format <hostname>:<brick path> should be provided. In case of remove-brick, state option should also be provided specifying the state of the volume after brick removal. volname This option specifies the volume name. Default name is glustervol Note In case of a volume operation, the 'hosts' section can be omitted, provided volname is in the format <hostname>:<volname>, where hostname is the hostname / IP of one of the nodes in the cluster Only single volume creation/deletion/configuration is supported. transport This option specifies the transport type. Default is tcp. Options are tcp or rdma (Deprecated) or tcp,rdma. replica This option will specify if the volume should be of type replica. options are yes and no. Default is no. If 'replica' is provided as yes, the 'replica_count' should be provided. disperse This option specifies if the volume should be of type disperse. Options are yes and no. Default is no. disperse_count This field is optional even if 'disperse' is yes. If not specified, the number of bricks specified in the command line is taken as the disperse_count value. redundancy_count If this value is not specified, and if 'disperse' is yes, it's default value is computed so that it generates an optimal configuration. 
force This is an optional field and can be used during volume creation to forcefully create the volume. For example: backend-setup Available in gdeploy 2.0. This section sets up the backend for using with GlusterFS volume. If more than one backend-setup has to be done, they can be done by numbering the section like [backend-setup1], [backend-setup2], ... backend-setup section supports the following variables: devices: This replaces the [pvs] section in gdeploy 1.x. devices variable lists the raw disks which should be used for backend setup. For example: This is a mandatory field. dalign: The Logical Volume Manager can use a portion of the physical volume for storing its metadata while the rest is used as the data portion. Align the I/O at the Logical Volume Manager (LVM) layer using the dalign option while creating the physical volume. For example: For JBOD, use an alignment value of 256K. For hardware RAID, the alignment value should be obtained by multiplying the RAID stripe unit size with the number of data disks. If 12 disks are used in a RAID 6 configuration, the number of data disks is 10; on the other hand, if 12 disks are used in a RAID 10 configuration, the number of data disks is 6. The following example is appropriate for 12 disks in a RAID 6 configuration with a stripe unit size of 128 KiB: The following example is appropriate for 12 disks in a RAID 10 configuration with a stripe unit size of 256 KiB: To view the previously configured physical volume settings for the dalign option, run the pvs -o +pe_start device command. For example: You can also set the dalign option in the PV section. vgs: This is an optional variable. This variable replaces the [vgs] section in gdeploy 1.x. vgs variable lists the names to be used while creating volume groups. The number of VG names should match the number of devices or should be left blank. gdeploy will generate names for the VGs. For example: A pattern can be provided for the vgs like custom_vg{1..3}, this will create three vgs. pools: This is an optional variable. The variable replaces the [pools] section in gdeploy 1.x. pools lists the thin pool names for the volume. Similar to vg, pattern can be provided for thin pool names. For example custom_pool{1..3} lvs: This is an optional variable. This variable replaces the [lvs] section in gdeploy 1.x. lvs lists the logical volume name for the volume. Patterns for LV can be provided similar to vg. For example custom_lv{1..3}. mountpoints: This variable deprecates the [mountpoints] section in gdeploy 1.x. Mountpoints lists the mount points where the logical volumes should be mounted. Number of mount points should be equal to the number of logical volumes. For example: ssd - This variable is set if caching has to be added. For example, the backed setup with ssd for caching should be: Note Specifying the name of the data LV is necessary while adding SSD. Make sure the datalv is created already. Otherwise ensure to create it in one of the earlier `backend-setup' sections. PV Available in gdeploy 2.0. If the user needs to have more control over setting up the backend, and does not want to use backend-setup section, then pv, vg, and lv modules are to be used. The pv module supports the following variables. action: Mandatory. Supports two values, 'create' and 'resize' Example: Creating physical volumes Example: Creating physical volumes on a specific host devices: Mandatory. The list of devices to use for pv creation. expand: Used when action=resize . 
Example: Expanding an already created pv shrink: Used when action=resize . Example: Shrinking an already created pv dalign: The Logical Volume Manager can use a portion of the physical volume for storing its metadata while the rest is used as the data portion. Align the I/O at the Logical Volume Manager (LVM) layer using the dalign option while creating the physical volume. For example: For JBOD, use an alignment value of 256K. For hardware RAID, the alignment value should be obtained by multiplying the RAID stripe unit size with the number of data disks. If 12 disks are used in a RAID 6 configuration, the number of data disks is 10; on the other hand, if 12 disks are used in a RAID 10 configuration, the number of data disks is 6. The following example is appropriate for 12 disks in a RAID 6 configuration with a stripe unit size of 128 KiB: The following example is appropriate for 12 disks in a RAID 10 configuration with a stripe unit size of 256 KiB: To view the previously configured physical volume settings for the dalign option, run the pvs -o +pe_start device command. For example: You can also set the dalign option in the backend-setup section. VG Available in gdeploy 2.0. This module is used to create and extend volume groups. The vg module supports the following variables. action - Action can be one of create or extend. pvname - PVs to use to create the volume. For more than one PV use comma separated values. vgname - The name of the vg. If no name is provided GLUSTER_vg will be used as default name. one-to-one - If set to yes, one-to-one mapping will be done between pv and vg. If action is set to extend, the vg will be extended to include pv provided. Example1: Create a vg named images_vg with two PVs Example2: Create two vgs named rhgs_vg1 and rhgs_vg2 with two PVs Example3: Extend an existing vg with the given disk. LV Available in gdeploy 2.0. This module is used to create, setup-cache, and convert logical volumes. The lv module supports the following variables: action - The action variable allows three values `create', `setup-cache', `convert', and `change'. If the action is 'create', the following options are supported: lvname: The name of the logical volume, this is an optional field. Default is GLUSTER_lv poolname - Name of the thinpool volume name, this is an optional field. Default is GLUSTER_pool lvtype - Type of the logical volume to be created, allowed values are `thin' and `thick'. This is an optional field, default is thick. size - Size of the logical volume volume. Default is to take all available space on the vg. extent - Extent size, default is 100%FREE force - Force lv create, do not ask any questions. Allowed values `yes', `no'. This is an optional field, default is yes. vgname - Name of the volume group to use. pvname - Name of the physical volume to use. chunksize - The size of the chunk unit used for snapshots, cache pools, and thin pools. By default this is specified in kilobytes. For RAID 5 and 6 volumes, gdeploy calculates the default chunksize by multiplying the stripe size and the disk count. For RAID 10, the default chunksize is 256 KB. See Section 19.2, "Brick Configuration" for details. Warning Red Hat recommends using at least the default chunksize. If the chunksize is too small and your volume runs out of space for metadata, the volume is unable to create data. This includes the data required to increase the size of the metadata pool or to migrate data away from a volume that has run out of metadata space. 
Red Hat recommends monitoring your logical volumes to ensure that they are expanded or more storage created before metadata volumes become completely full. poolmetadatasize - Sets the size of pool's metadata logical volume. Allocate the maximum chunk size (16 GiB) if possible. If you allocate less than the maximum, allocate at least 0.5% of the pool size to ensure that you do not run out of metadata space. Warning If your metadata pool runs out of space, you cannot create data. This includes the data required to increase the size of the metadata pool or to migrate data away from a volume that has run out of metadata space. Monitor your metadata pool using the lvs -o+metadata_percent command and ensure that it does not run out of space. virtualsize - Creates a thinly provisioned device or a sparse device of the given size mkfs - Creates a filesystem of the given type. Default is to use xfs. mkfs-opts - mkfs options. mount - Mount the logical volume. If the action is setup-cache, the below options are supported: ssd - Name of the ssd device. For example sda/vda/ ... to setup cache. vgname - Name of the volume group. poolname - Name of the pool. cache_meta_lv - Due to requirements from dm-cache (the kernel driver), LVM further splits the cache pool LV into two devices - the cache data LV and cache metadata LV. Provide the cache_meta_lv name here. cache_meta_lvsize - Size of the cache meta lv. cache_lv - Name of the cache data lv. cache_lvsize - Size of the cache data. force - Force If the action is convert, the below options are supported: lvtype - type of the lv, available options are thin and thick force - Force the lvconvert, default is yes. vgname - Name of the volume group. poolmetadata - Specifies cache or thin pool metadata logical volume. cachemode - Allowed values writeback, writethrough. Default is writethrough. cachepool - This argument is necessary when converting a logical volume to a cache LV. Name of the cachepool. lvname - Name of the logical volume. chunksize - The size of the chunk unit used for snapshots, cache pools, and thin pools. By default this is specified in kilobytes. For RAID 5 and 6 volumes, gdeploy calculates the default chunksize by multiplying the stripe size and the disk count. For RAID 10, the default chunksize is 256 KB. See Section 19.2, "Brick Configuration" for details. Warning Red Hat recommends using at least the default chunksize. If the chunksize is too small and your volume runs out of space for metadata, the volume is unable to create data. Red Hat recommends monitoring your logical volumes to ensure that they are expanded or more storage created before metadata volumes become completely full. poolmetadataspare - Controls creation and maintanence of pool metadata spare logical volume that will be used for automated pool recovery. thinpool - Specifies or converts logical volume into a thin pool's data volume. Volume's name or path has to be given. If the action is change, the below options are supported: lvname - Name of the logical volume. vgname - Name of the volume group. zero - Set zeroing mode for thin pool. Example 1: Create a thin LV Example 2: Create a thick LV If there are more than one LVs, then the LVs can be created by numbering the LV sections, like [lv1], [lv2] ... RH-subscription Available in gdeploy 2.0. This module is used to subscribe, unsubscribe, attach, enable repos etc. The RH-subscription module allows the following variables: This module is used to subscribe, unsubscribe, attach, enable repos etc. 
The RH-subscription module allows the following variables: If the action is register , the following options are supported: username/activationkey: Username or activationkey. password/activationkey: Password or activation key auto-attach: true/false pool: Name of the pool. repos: Repos to subscribe to. disable-repos: Repo names to disable. Leaving this option blank will disable all the repos. ignore_register_errors: If set to no, gdeploy will exit if system registration fails. If the action is attach-pool the following options are supported: pool - Pool name to be attached. ignore_attach_pool_errors - If set to no, gdeploy fails if attach-pool fails. If the action is enable-repos the following options are supported: repos - List of comma separated repos that are to be subscribed to. ignore_enable_errors - If set to no, gdeploy fails if enable-repos fail. If the action is disable-repos the following options are supported: repos - List of comma separated repos that are to be subscribed to. ignore_disable_errors - If set to no, gdeploy fails if disable-repos fail If the action is unregister the systems will be unregistered. ignore_unregister_errors - If set to no, gdeploy fails if unregistering fails. Example 1: Subscribe to Red Hat Subscription network: Example 2: Disable all the repos: Example 3: Enable a few repos yum Available in gdeploy 2.0. This module is used to install or remove rpm packages, with the yum module we can add repos as well during the install time. The action variable allows two values `install' and `remove'. If the action is install the following options are supported: packages - Comma separated list of packages that are to be installed. repos - The repositories to be added. gpgcheck - yes/no values have to be provided. update - Whether yum update has to be initiated. If the action is remove then only one option has to be provided: remove - The comma separated list of packages to be removed. For example Install a package on a particular host. shell Available in gdeploy 2.0. This module allows user to run shell commands on the remote nodes. Currently shell provides a single action variable with value execute. And a command variable with any valid shell command as value. The below command will execute vdsm-tool on all the nodes. update-file Available in gdeploy 2.0. update-file module allows users to copy a file, edit a line in a file, or add new lines to a file. action variable can be any of copy, edit, or add. When the action variable is set to copy, the following variables are supported. src - The source path of the file to be copied from. dest - The destination path on the remote machine to where the file is to be copied to. When the action variable is set to edit, the following variables are supported. dest - The destination file name which has to be edited. replace - A regular expression, which will match a line that will be replaced. line - Text that has to be replaced. When the action variable is set to add, the following variables are supported. dest - File on the remote machine to which a line has to be added. line - Line which has to be added to the file. Line will be added towards the end of the file. Example 1: Copy a file to a remote machine. Example 2: Edit a line in the remote machine, in the below example lines that have allowed_hosts will be replaced with allowed_hosts=host.redhat.com Example 3: Add a line to the end of a file For Red Hat Enterprise Linux 7: For Red Hat Enterprise Linux 8: service Available in gdeploy 2.0. 
The service module allows user to start, stop, restart, reload, enable, or disable a service. The action variable specifies these values. When action variable is set to any of start, stop, restart, reload, enable, disable the variable servicename specifies which service to start, stop etc. service - Name of the service to start, stop etc. For Red Hat Enterprise Linux 7: Example: enable and start ntp daemon. For Red Hat Enterprise Linux 8: Example: enable and start chrony daemon. script Available in gdeploy 2.0. script module enables user to execute a script/binary on the remote machine. action variable is set to execute. Allows user to specify two variables file and args. file - An executable on the local machine. args - Arguments to the above program. Example: Execute script disable-multipath.sh on all the remote nodes listed in `hosts' section. firewalld Available in gdeploy 2.0. firewalld module allows the user to manipulate firewall rules. action variable supports two values `add' and `delete'. Both add and delete support the following variables: ports/services - The ports or services to add to firewall. permanent - Whether to make the entry permanent. Allowed values are true/false zone - Default zone is public For example: geo-replication Available in gdeploy 2.0.2, geo-replication module allows the user to configure geo-replication, control and verify geo-replication sessions. The following are the supported variables: action - The action to be performed for the geo-replication session. create - To create a geo-replication session. start - To start a created geo-replication session. stop - To stop a started geo-replication session. pause - To pause a geo-replication session. resume - To resume a paused geo-replication session. delete - To delete a geo-replication session. georepuser - Username to be used for the action being performed Important If georepuser variable is omitted, the user is assumed to be root user. mastervol - Master volume details in the following format: slavevol - Slave volume details in the following format: slavenodes - Slave node IP addresses in the following format: Important Slave IP addresses must be comma (,) separated. force - Force the system to perform the action. Allowed values are yes or no . start - Start the action specified in the configuration file. Allowed values are yes or no . Default value is yes . For example: 5.1.8. Deploying NFS Ganesha using gdeploy gdeploy supports the deployment and configuration of NFS Ganesha on Red Hat Gluster Storage 3.5, from gdeploy version 2.0.2-35. NFS-Ganesha is a user space file server for the NFS protocol. For more information about NFS-Ganesha see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html-single/administration_guide/#nfs_ganesha 5.1.8.1. Prerequisites Ensure that the following prerequisites are met: Subscribing to Subscription Manager You must subscribe to subscription manager and obtain the NFS Ganesha packages before continuing further. Add the following details to the configuration file to subscribe to subscription manager: Execute the following command to run the configuration file: Enabling Repos To enable the required repos, add the following details in the configuration file: Execute the following command to run the configuration file: Enabling Firewall Ports To enable the firewall ports, add the following details in the configuration file: Note To ensure NFS client UDP mount does not fail, ensure to add port 2049/udp in [firewalld] section of gdeploy. 
Execute the following command to run the configuration file: Installing the Required Package: To install the required package, add the following details in the configuration file Execute the following command to run the configuration file: 5.1.8.2. Supported Actions The NFS Ganesha module in gdeploy allows the user to perform the following actions: Creating a Cluster Destroying a Cluster Adding a Node Deleting a Node Exporting a Volume Unexporting a Volume Refreshing NFS Ganesha Configuration Creating a Cluster This action creates a fresh NFS-Ganesha setup on a given volume. For this action the nfs-ganesha in the configuration file section supports the following variables: ha-name : This is an optional variable. By default it is ganesha-ha-360. cluster-nodes : This is a required argument. This variable expects comma separated values of cluster node names, which is used to form the cluster. vip : This is a required argument. This variable expects comma separated list of ip addresses. These will be the virtual ip addresses. volname : This is an optional variable if the configuration contains the [volume] section For example: To create a NFS-Ganesha cluster add the following details in the configuration file: In the above example, it is assumed that the required packages are installed, a volume is created and NFS-Ganesha is enabled on it. Execute the configuration using the following command: Destroying a Cluster The action, destroy-cluster cluster disables NFS Ganesha. It allows one variable, cluster-nodes . For example: To destroy a NFS-Ganesha cluster add the following details in the configuration file: Execute the configuration using the following command: Adding a Node The add-node action allows three variables: nodes : Accepts a list of comma separated hostnames that have to be added to the cluster vip : Accepts a list of comma separated ip addresses. cluster_nodes : Accepts a list of comma separated nodes of the NFS Ganesha cluster. For example, to add a node, add the following details to the configuration file: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Execute the configuration using the following command: Deleting a Node The delete-node action takes one variable, nodes , which specifies the node or nodes to delete from the NFS Ganesha cluster in a comma delimited list. For example: Exporting a Volume This action exports a volume. export-volume action supports one variable, volname . For example, to export a volume, add the following details to the configuration file: Execute the configuration using the following command: Unexporting a Volume: This action unexports a volume. unexport-volume action supports one variable, volname . For example, to unexport a volume, add the following details to the configuration file: Execute the configuration using the following command: Refreshing NFS Ganesha Configuration This action will add/delete or add a config block to the configuration file and runs refresh-config on the cluster. 
The action refresh-config supports the following variables: del-config-lines block-name volname ha-conf-dir update_config_lines Example 1 - To add a client block and run refresh-config, add the following details to the configuration file: Note refresh-config with a client block has a few limitations: It works for only one client. The user cannot delete a line from a config block. Execute the configuration using the following command: Example 2 - To delete a line and run refresh-config, add the following details to the configuration file: Execute the configuration using the following command: Example 3 - To run refresh-config on a volume, add the following details to the configuration file: Execute the configuration using the following command: Example 4 - To modify a line and run refresh-config, add the following details to the configuration file: Execute the configuration using the following command: 5.1.9. Deploying Samba / CTDB using gdeploy The Server Message Block (SMB) protocol can be used to access Red Hat Gluster Storage volumes by exporting directories in GlusterFS volumes as SMB shares on the server. In Red Hat Gluster Storage, Samba is used to share volumes through the SMB protocol. 5.1.9.1. Prerequisites Ensure that the following prerequisites are met: Subscribing to Subscription Manager You must subscribe to Subscription Manager and obtain the Samba packages before continuing further. Add the following details to the configuration file to subscribe to Subscription Manager: Execute the following command to run the configuration file: Enabling Repos To enable the required repos, add the following details in the configuration file: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Execute the following command to run the configuration file: Enabling Firewall Ports To enable the firewall ports, add the following details in the configuration file: Execute the following command to run the configuration file: Installing the Required Package: To install the required package, add the following details in the configuration file: Execute the following command to run the configuration file: 5.1.9.2. Setting up Samba Samba can be enabled in two ways: Enabling Samba on an existing volume Enabling Samba while creating a volume Enabling Samba on an existing volume If a Red Hat Gluster Storage volume is already present, then the user has to mention the action as smb-setup in the volume section. It is necessary to mention all the hosts that are in the cluster, as gdeploy updates the glusterd configuration files on each of the hosts. For example, to enable Samba on an existing volume, add the following details to the configuration file: Note Ensure that the hosts are not part of the CTDB cluster. Execute the configuration using the following command: Enabling Samba while creating a Volume If Samba has to be set up while creating a volume, the variable smb has to be set to yes in the configuration file. For example, to enable Samba while creating a volume, add the following details to the configuration file: Execute the configuration using the following command: Note In both the cases of enabling Samba, smb_username and smb_mountpoint are necessary if Samba has to be set up with the ACLs set correctly. 5.1.9.3. Setting up CTDB Using CTDB requires setting up a separate volume in order to protect the CTDB lock file. Red Hat recommends a replicated volume where the replica count is equal to the number of servers being used as Samba servers.
The following configuration file sets up a CTDB volume across two hosts that are also Samba servers. You can configure the CTDB cluster to use separate IP addresses by using the ctdb_nodes parameter, as shown in the following example. Execute the configuration using the following command: 5.1.10. Enabling SSL on a Volume You can create volumes with SSL enabled, or enable SSL on existing volumes using gdeploy (v2.0.1 onwards). This section explains how the configuration files should be written for gdeploy to enable SSL. 5.1.10.1. Creating a Volume and Enabling SSL To create a volume and enable SSL on it, add the following details to the configuration file: In the above example, a volume named vol1 is created and SSL is enabled on it. gdeploy creates self-signed certificates. After adding the details to the configuration file, execute the following command to run the configuration file: 5.1.10.2. Enabling SSL on an Existing Volume To enable SSL on an existing volume, add the following details to the configuration file: After adding the details to the configuration file, execute the following command to run the configuration file: 5.1.11. Gdeploy log files Because gdeploy is usually run by non-privileged users, by default, gdeploy log files are written to /home/ username /.gdeploy/logs/gdeploy.log instead of the /var/log directory. You can change the log location by setting a different location as the value of the GDEPLOY_LOGFILE environment variable. For example, to set the gdeploy log location to /var/log/gdeploy/gdeploy.log for this session, run the following command: To persistently set this as the default log location for this user, add the same command as a separate line in the /home/ username /.bash_profile file for that user.
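As a minimal sketch of the persistent setting described above, assuming the example log location /var/log/gdeploy/gdeploy.log and that this directory already exists and is writable by the user who runs gdeploy:
# Append the export line to the user's ~/.bash_profile so that new login shells pick it up
echo 'export GDEPLOY_LOGFILE=/var/log/gdeploy/gdeploy.log' >> ~/.bash_profile
The new location takes effect in subsequent login shells; for the current session, run the export command shown in the example above.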
[ "ssh-keygen -t rsa -N ''", "ssh-copy-id -i root@ server", "subscription-manager repos --enable=ansible-2-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhel-7-server-ansible-2-rpms", "yum install ansible", "yum install gdeploy", "/usr/share/doc/gdeploy/examples/gluster.conf.sample", "# Usage: gdeploy -c 3x3-volume-create.conf # This does backend setup first and then create the volume using the setup bricks. # # [hosts] 10.70.46.13 10.70.46.17 10.70.46.21 Common backend setup for 2 of the hosts. [backend-setup] devices=sdb,sdc,sdd vgs=vg1,vg2,vg3 pools=pool1,pool2,pool3 lvs=lv1,lv2,lv3 mountpoints=/rhgs/brick1,/rhgs/brick2,/rhgs/brick3 brick_dirs=/rhgs/brick1/b1,/rhgs/brick2/b2,/rhgs/brick3/b3 If backend-setup is different for each host [backend-setup:10.70.46.13] devices=sdb brick_dirs=/rhgs/brick1 # [backend-setup:10.70.46.17] devices=sda,sdb,sdc brick_dirs=/rhgs/brick{1,2,3} # [volume] action=create volname=sample_volname replica=yes replica_count=3 force=yes [clients] action=mount volname=sample_volname hosts=10.70.46.15 fstype=glusterfs client_mount_points=/mnt/gluster", "gdeploy -c txt.conf", "/usr/share/doc/gdeploy/examples/gluster.conf.sample", "# Usage: gdeploy -c backend-setup-generic.conf # This configuration creates backend for GlusterFS clusters # [hosts] 10.70.46.130 10.70.46.32 10.70.46.110 10.70.46.77 Backend setup for all the nodes in the `hosts' section. This will create PV, VG, and LV with gdeploy generated names. [backend-setup] devices=vdb", "# Usage: gdeploy -c backend-setup-hostwise.conf # This configuration creates backend for GlusterFS clusters # [hosts] 10.70.46.130 10.70.46.32 10.70.46.110 10.70.46.77 Backend setup for 10.70.46.77 with default gdeploy generated names for Volume Groups and Logical Volumes. Volume names will be GLUSTER_vg1, GLUSTER_vg2 [backend-setup:10.70.46.77] devices=vda,vdb Backend setup for remaining 3 hosts in the `hosts' section with custom names for Volumes Groups and Logical Volumes. [backend-setup:10.70.46.{130,32,110}] devices=vdb,vdc,vdd vgs=vg1,vg2,vg3 pools=pool1,pool2,pool3 lvs=lv1,lv2,lv3 mountpoints=/rhgs/brick1,/rhgs/brick2,/rhgs/brick3 brick_dirs=/rhgs/brick1/b1,/rhgs/brick2/b2,/rhgs/brick3/b3", "[hosts] 10.70.46.130 10.70.46.32 [pv] action=create devices=vdb [vg1] action=create vgname=RHS_vg1 pvname=vdb [lv1] action=create vgname=RHS_vg1 lvname=engine_lv lvtype=thick size=10GB mount=/rhgs/brick1 [lv2] action=create vgname=RHS_vg1 poolname=lvthinpool lvtype=thinpool poolmetadatasize=200MB chunksize=1024k size=30GB [lv3] action=create lvname=lv_vmaddldisks poolname=lvthinpool vgname=RHS_vg1 lvtype=thinlv mount=/rhgs/brick2 virtualsize=9GB [lv4] action=create lvname=lv_vmrootdisks poolname=lvthinpool vgname=RHS_vg1 size=19GB lvtype=thinlv mount=/rhgs/brick3 virtualsize=19GB", "# Extends a given given VG. pvname and vgname is mandatory, in this example the vg `RHS_vg1' is extended by adding pv, vdd. If the pv is not alreay present, it is created by gdeploy. 
# [hosts] 10.70.46.130 10.70.46.32 [vg2] action=extend vgname=RHS_vg1 pvname=vdd", "/usr/share/doc/gdeploy/examples/gluster.conf.sample", "[hosts] 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 [volume] action=create volname=glustervol transport=tcp,rdma replica=yes replica_count=3 brick_dirs=/glus/brick1/b1,/glus/brick1/b1,/glus/brick1/b1 force=yes", "gdeploy -c txt.conf", "[hosts] 10.70.46.130 10.70.46.32 10.70.46.16 [backend-setup] devices=vdb,vdc,vdd,vde mountpoints=/mnt/data{1-6} brick_dirs=/mnt/data1/1,/mnt/data2/2,/mnt/data3/3,/mnt/data4/4,/mnt/data5/5,/mnt/data6/6 [volume1] action=create volname=vol-one transport=tcp replica=yes replica_count=3 brick_dirs=/mnt/data1/1,/mnt/data2/2,/mnt/data5/5 [volume2] action=create volname=vol-two transport=tcp replica=yes replica_count=3 brick_dirs=/mnt/data3/3,/mnt/data4/4,/mnt/data6/6", "[hosts] 10.70.46.130 10.70.46.32 10.70.46.16 [backend-setup] devices=vdb,vdc mountpoints=/mnt/data{1-6} [volume1] action=create volname=vol-one transport=tcp replica=yes replica_count=3 key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm value=virt,36,36,on,512MB,32,full brick_dirs=/mnt/data1/1,/mnt/data3/3,/mnt/data5/5 [volume2] action=create volname=vol-two transport=tcp replica=yes key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm value=virt,36,36,on,512MB,32,full replica_count=3 brick_dirs=/mnt/data2/2,/mnt/data4/4,/mnt/data6/6", "/usr/share/doc/gdeploy/examples/gluster.conf.sample", "[clients] action=mount hosts=10.70.46.159 fstype=glusterfs client_mount_points=/mnt/gluster volname=10.0.0.1:glustervol", "gdeploy -c txt.conf", "[volume] action=add-brick volname=10.0.0.1:glustervol bricks=10.0.0.1:/rhgs/new_brick", "gdeploy -c txt.conf", "[volume] action=remove-brick volname=10.0.0.1:glustervol bricks=10.0.0.2:/rhgs/brick state=commit", "gdeploy -c txt.conf", "[volume] action=rebalance volname=10.70.46.13:glustervol state=start", "gdeploy -c txt.conf", "[volume] action=start volname=10.0.0.1:glustervol", "gdeploy -c txt.conf", "[volume] action=stop volname=10.0.0.1:glustervol", "gdeploy -c txt.conf", "[volume] action=delete volname=10.70.46.13:glustervol", "gdeploy -c txt.conf", "[hosts] 10.0.0.1 10.0.0.2", "[devices] /dev/sda /dev/sdb", "[disktype] raid6", "[diskcount] 10", "\"Warning: We recommend a stripe unit size of 256KB for RAID 10\"", "[stripesize] 128", "[vgs] CUSTOM_vg1 CUSTOM_vg2", "[pools] CUSTOM_pool1 CUSTOM_pool2", "[lvs] CUSTOM_lv1 CUSTOM_lv2", "[mountpoints] /rhgs/brick1 /rhgs/brick2", "[peer] action=probe", "* action * hosts * fstype * client_mount_points", "[clients] action=mount hosts=10.0.0.10 fstype=nfs options=vers=3 client_mount_points=/mnt/rhs", "* action * volname * transport * replica * replica_count * disperse * disperse_count * redundancy_count * force", "[volname] action=create volname=glustervol transport=tcp,rdma replica=yes replica_count=3 force=yes", "[backend-setup] devices=sda,sdb,sdc", "[backend-setup] devices=sdb,sdc,sdd,sde dalign=256k", "[backend-setup] devices=sdb,sdc,sdd,sde dalign=1280k", "[backend-setup] devices=sdb,sdc,sdd,sde dalign=1536k", "pvs -o +pe_start /dev/sdb PV VG Fmt Attr PSize PFree 1st PE /dev/sdb lvm2 a-- 9.09t 9.09t 1.25m", "[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3", "[backend-setup] devices=sda,sdb,sdc vgs=custom_vg{1..3}", "[backend-setup] devices=sda,sdb,sdc 
vgs=custom_vg1,custom_vg2,custom_vg3 pools=custom_pool1,custom_pool2,custom_pool3", "[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3 pools=custom_pool1,custom_pool2,custom_pool3 lvs=custom_lv1,custom_lv2,custom_lv3", "[backend-setup] devices=sda,sdb,sdc vgs=custom_vg1,custom_vg2,custom_vg3 pools=custom_pool1,custom_pool2,custom_pool3 lvs=custom_lv1,custom_lv2,custom_lv3 mountpoints=/gluster/data1,/gluster/data2,/gluster/data3", "[backend-setup] ssd=sdc vgs=RHS_vg1 datalv=lv_data cachedatalv=lv_cachedata:1G cachemetalv=lv_cachemeta:230G", "[pv] action=create devices=vdb,vdc,vdd", "[pv:10.0.5.2] action=create devices=vdb,vdc,vdd", "[pv] action=resize devices=vdb expand=yes", "[pv] action=resize devices=vdb shrink=100G", "[pv] action=create devices=sdb,sdc,sdd,sde dalign=256k", "[pv] action=create devices=sdb,sdc,sdd,sde dalign=1280k", "[pv] action=create devices=sdb,sdc,sdd,sde dalign=1536k", "pvs -o +pe_start /dev/sdb PV VG Fmt Attr PSize PFree 1st PE /dev/sdb lvm2 a-- 9.09t 9.09t 1.25m", "[vg] action=create vgname=images_vg pvname=sdb,sdc", "[vg] action=create vgname=rhgs_vg pvname=sdb,sdc one-to-one=yes", "[vg] action=extend vgname=rhgs_images pvname=sdc", "[lv] action=create vgname=RHGS_vg1 poolname=lvthinpool lvtype=thinpool poolmetadatasize=200MB chunksize=1024k size=30GB", "[lv] action=create vgname=RHGS_vg1 lvname=engine_lv lvtype=thick size=10GB mount=/rhgs/brick1", "[RH-subscription1] action=register [email protected] password=<passwd> pool=<pool> ignore_register_errors=no", "[RH-subscription2] action=disable-repos repos=*", "[RH-subscription3] action=enable-repos repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rhel-7-server-rhev-mgmt-agent-rpms ignore_enable_errors=no", "[yum1] action=install gpgcheck=no Repos should be an url; eg: http://repo-pointing-glusterfs-builds repos=<glusterfs.repo>,<vdsm.repo> packages=vdsm,vdsm-gluster,ovirt-hosted-engine-setup,screen,xauth update=yes", "[yum2:host1] action=install gpgcheck=no packages=rhevm-appliance", "[shell] action=execute command=vdsm-tool configure --force", "[update-file] action=copy src=/tmp/foo.cfg", "[update-file] action=edit replace=allowed_hosts line=allowed_hosts=host.redhat.com", "[update-file] action=add dest=/etc/ntp.conf line=server clock.redhat.com iburst", "[update-file] action=add dest=/etc/chrony.conf line=server 0.rhel.pool.ntp.org iburst", "[service1] action=enable service=ntpd", "[service2] action=restart service=ntpd", "[service1] action=enable service=chrony", "[service2] action=restart service=chrony", "[script] action=execute file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh", "[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp services=glusterfs", "Master_HostName : Master_VolName", "Slave_HostName : Slave_VolName", "Slave1_IPAddress , Slave2_IPAddress", "[geo-replication] action=create georepuser=testgeorep mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavevolume slavenodes=10.1.1.28,10.1.1.86 force=yes start=yes", "[RH-subscription1] action=register username=<user>@redhat.com password=<password> pool=<pool-id>", "gdeploy -c txt.conf", "[RH-subscription2] action=enable-repos repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rh-gluster-3-nfs-for-rhel-7-server-rpms,rhel-ha-for-rhel-7-server-rpms,rhel-7-server-ansible-2-rpms", "gdeploy -c txt.conf", "[firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp 
services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota", "gdeploy -c txt.conf", "[yum] action=install repolist= gpgcheck=no update=no packages=glusterfs-ganesha", "gdeploy -c txt.conf", "[hosts] host-1.example.com host-2.example.com host-3.example.com host-4.example.com [backend-setup] devices=/dev/vdb vgs=vg1 pools=pool1 lvs=lv1 mountpoints=/mnt/brick [firewalld] action=add ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,662/tcp,662/udp services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota [volume] action=create volname=ganesha transport=tcp replica_count=3 force=yes #Creating a high availability cluster and exporting the volume [nfs-ganesha] action=create-cluster ha-name=ganesha-ha-360 cluster-nodes=host-1.example.com,host-2.example.com,host-3.example.com,host-4 .example.com vip=10.70.44.121,10.70.44.122 volname=ganesha ignore_ganesha_errors=no", "gdeploy -c txt.conf", "[hosts] host-1.example.com host-2.example.com To destroy the high availability cluster [nfs-ganesha] action=destroy-cluster cluster-nodes=host-1.example.com,host-2.example.com", "gdeploy -c txt.conf", "[hosts] host-1.example.com host-2.example.com host-3.example.com [peer] action=probe [clients] action=mount volname=host-3.example.com:gluster_shared_storage hosts=host-3.example.com fstype=glusterfs client_mount_points=/var/run/gluster/shared_storage/ [nfs-ganesha] action=add-node nodes=host-3.example.com cluster_nodes=host-1.example.com,host-2.example.com vip=10.0.0.33", "gdeploy -c txt.conf", "[hosts] host-1.example.com host-2.example.com host-3.example.com host-4.example.com [nfs-ganesha] action=delete-node nodes=host-2.example.com", "[hosts] host-1.example.com host-2.example.com [nfs-ganesha] action=export-volume volname=ganesha", "gdeploy -c txt.conf", "[hosts] host-1.example.com host-2.example.com [nfs-ganesha] action=unexport-volume volname=ganesha", "gdeploy -c txt.conf", "[hosts] host1-example.com host2-example.com [nfs-ganesha] action=refresh-config Default block name is `client' block-name=client config-block=clients = 10.0.0.1;|allow_root_access = true;|access_type = \"RO\";|Protocols = \"2\", \"3\";|anonymous_uid = 1440;|anonymous_gid = 72; volname=ganesha", "gdeploy -c txt.conf", "[hosts] host1-example.com host2-example.com [nfs-ganesha] action=refresh-config del-config-lines=client volname=ganesha", "gdeploy -c txt.conf", "[hosts] host1-example.com host2-example.com [nfs-ganesha] action=refresh-config volname=ganesha", "gdeploy -c txt.conf", "[hosts] host1-example.com host2-example.com [nfs-ganesha] action=refresh-config update_config_lines=Access_type = \"RO\"; #update_config_lines=Protocols = \"4\"; #update_config_lines=clients = 10.0.0.1; volname=ganesha", "gdeploy -c txt.conf", "[RH-subscription1] action=register username=<user>@redhat.com password=<password> pool=<pool-id>", "gdeploy -c txt.conf", "[RH-subscription2] action=enable-repos repos=rhel-7-server-rpms,rh-gluster-3-for-rhel-7-server-rpms,rh-gluster-3-samba-for-rhel-7-server-rpms,rhel-7-server-ansible-2-rpms", "[RH-subscription2] action=enable-repos rh-gluster-3-for-rhel-8-x86_64-rpms,ansible-2-for-rhel-8-x86_64-rpms,rhel-8-for-x86_64-baseos-rpms,rhel-8-for-x86_64-appstream-rpms,rhel-8-for-x86_64-highavailability-rpms,rh-gluster-3-samba-for-rhel-8-x86_64-rpms", "gdeploy -c txt.conf", "[firewalld] action=add ports=54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,4379/tcp services=glusterfs,samba,high-availability", "gdeploy -c txt.conf", "[yum] action=install repolist= gpgcheck=no update=no 
packages=samba,samba-client,glusterfs-server,ctdb", "gdeploy -c txt.conf", "[hosts] 10.70.37.192 10.70.37.88 [volume] action=smb-setup volname=samba1 force=yes smb_username=smbuser smb_mountpoint=/mnt/smb", "gdeploy -c txt.conf", "[hosts] 10.70.37.192 10.70.37.88 10.70.37.65 [backend-setup] devices=/dev/vdb vgs=vg1 pools=pool1 lvs=lv1 mountpoints=/mnt/brick [volume] action=create volname=samba1 smb=yes force=yes smb_username=smbuser smb_mountpoint=/mnt/smb", "gdeploy -c txt.conf", "[hosts] 10.70.37.192 10.70.37.88 10.70.37.65 [volume] action=create volname=ctdb transport=tcp replica_count=3 force=yes [ctdb] action=setup public_address=10.70.37.6/24 eth0,10.70.37.8/24 eth0 volname=ctdb", "[hosts] 10.70.37.192 10.70.37.88 10.70.37.65 [volume] action=create volname=ctdb transport=tcp replica_count=3 force=yes [ctdb] action=setup public_address=10.70.37.6/24 eth0,10.70.37.8/24 eth0 ctdb_nodes=192.168.1.1,192.168.2.5 volname=ctdb", "gdeploy -c txt.conf", "[hosts] 10.70.37.147 10.70.37.47 10.70.37.13 [backend-setup] devices=/dev/vdb vgs=vg1 pools=pool1 lvs=lv1 mountpoints=/mnt/brick [volume] action=create volname=vol1 transport=tcp replica_count=3 force=yes enable_ssl=yes ssl_clients=10.70.37.107,10.70.37.173 brick_dirs=/data/1 [clients] action=mount hosts=10.70.37.173,10.70.37.107 volname=vol1 fstype=glusterfs client_mount_points=/mnt/data", "gdeploy -c txt.conf", "[hosts] 10.70.37.147 10.70.37.47 It is important for the clients to be unmounted before setting up SSL [clients1] action=unmount hosts=10.70.37.173,10.70.37.107 client_mount_points=/mnt/data [volume] action=enable-ssl volname=vol2 ssl_clients=10.70.37.107,10.70.37.173 [clients2] action=mount hosts=10.70.37.173,10.70.37.107 volname=vol2 fstype=glusterfs client_mount_points=/mnt/data", "gdeploy -c txt.conf", "export GDEPLOY_LOGFILE=/var/log/gdeploy/gdeploy.log" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Red_Hat_Storage_Volumes
Chapter 70. Kubernetes Custom Resources
Chapter 70. Kubernetes Custom Resources Since Camel 3.7 Both producer and consumer are supported The Kubernetes Custom Resources component is one of the Kubernetes Components which provides a producer to execute Kubernetes Custom Resources operations and a consumer to consume events related to Node objects. 70.1. Dependencies When using kubernetes-custom-resources with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 70.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 70.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 70.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 70.3. Component Options The Kubernetes Custom Resources component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 70.4. Endpoint Options The Kubernetes Custom Resources endpoint is configured using URI syntax: with the following path and query parameters: 70.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 70.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 70.5. Message Headers The Kubernetes Custom Resources component supports 13 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesCRDInstanceName (producer) Constant: KUBERNETES_CRD_INSTANCE_NAME The deployment name. String CamelKubernetesCRDEventTimestamp (consumer) Constant: KUBERNETES_CRD_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long CamelKubernetesCRDEventAction (consumer) Constant: KUBERNETES_CRD_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesCRDName (producer) Constant: KUBERNETES_CRD_NAME The Consumer CRD Resource name we would like to watch. String CamelKubernetesCRDGroup (producer) Constant: KUBERNETES_CRD_GROUP The Consumer CRD Resource Group we would like to watch. String CamelKubernetesCRDScope (producer) Constant: KUBERNETES_CRD_SCOPE The Consumer CRD Resource Scope we would like to watch. String CamelKubernetesCRDVersion (producer) Constant: KUBERNETES_CRD_VERSION The Consumer CRD Resource Version we would like to watch. String CamelKubernetesCRDPlural (producer) Constant: KUBERNETES_CRD_PLURAL The Consumer CRD Resource Plural we would like to watch. String CamelKubernetesCRDLabels (producer) Constant: KUBERNETES_CRD_LABELS The CRD resource labels. Map CamelKubernetesCRDInstance (producer) Constant: KUBERNETES_CRD_INSTANCE The manifest of the CRD resource to create as JSON string. String CamelKubernetesDeleteResult (producer) Constant: KUBERNETES_DELETE_RESULT The result of the delete operation. boolean 70.6. Supported producer operation listCustomResources listCustomResourcesByLabels getCustomResource deleteCustomResource createCustomResource updateCustomResource 70.7. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. 
Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
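Putting the component, endpoint, and header tables together, a route definition might look like the following minimal Java DSL sketch. This is an illustration only: the master URL, the "widgets" custom resource coordinates (group, version, plural, name, scope), and the "demo" namespace are hypothetical placeholders, and client authentication options such as oauthToken are omitted. Only the endpoint option names, header names, and the listCustomResources operation come from the tables above.
import org.apache.camel.builder.RouteBuilder;

public class CustomResourceRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consumer: watch add/modify/delete events for a hypothetical "widgets" CRD
        // in the "demo" namespace, using the consumer options listed above.
        from("kubernetes-custom-resources:https://kubernetes.example.com:6443"
                + "?crdGroup=example.com&crdVersion=v1&crdPlural=widgets"
                + "&crdName=widgets.example.com&crdScope=Namespaced&namespace=demo")
            .log("CRD event ${header.CamelKubernetesCRDEventAction}: ${body}");

        // Producer: list the same custom resources on demand, passing the CRD
        // coordinates as message headers and selecting the operation on the endpoint.
        from("direct:listWidgets")
            .setHeader("CamelKubernetesNamespaceName", constant("demo"))
            .setHeader("CamelKubernetesCRDGroup", constant("example.com"))
            .setHeader("CamelKubernetesCRDVersion", constant("v1"))
            .setHeader("CamelKubernetesCRDPlural", constant("widgets"))
            .setHeader("CamelKubernetesCRDName", constant("widgets.example.com"))
            .to("kubernetes-custom-resources:https://kubernetes.example.com:6443"
                + "?operation=listCustomResources");
    }
}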
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-custom-resources:masterUrl" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-custom-resources-component-starter
Chapter 2. An overview of OpenShift Data Foundation architecture
Chapter 2. An overview of OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation provides services for, and can run internally from, Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on Installer Provisioned Infrastructure or User Provisioned Infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process. To learn more about the interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see the interoperability matrix. For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture.
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/red_hat_openshift_data_foundation_architecture/an-overview-of-openshift-data-foundation-architecture_rhodf
4.3. Displaying Device-Specific Fencing Options
4.3. Displaying Device-Specific Fencing Options Use the following command to view the options for the specified STONITH agent. For example, the following command displays the options for the fence agent for APC over telnet/SSH.
[ "pcs stonith describe stonith_agent", "pcs stonith describe fence_apc Stonith options for: fence_apc ipaddr (required): IP Address or Hostname login (required): Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection port (required): Physical plug number or name of virtual machine identity_file: Identity file for ssh switch: Physical switch number on device inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device action (required): Fencing Action verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicespecific-HAAR
Appendix A. Image configuration parameters
Appendix A. Image configuration parameters The following keys can be used with the property option for both the glance image-update and glance image-create commands. Table A.1. Property keys Specific to Key Description Supported values All architecture The CPU architecture that must be supported by the hypervisor. For example, x86_64 , arm , or ppc64 . Run uname -m to get the architecture of a machine. alpha - DEC 64-bit RISC armv7l - ARM Cortex-A7 MPCore cris - Ethernet, Token Ring, AXis-Code Reduced Instruction Set i686 - Intel sixth-generation x86 (P6 micro architecture) ia64 - Itanium lm32 - Lattice Micro32 m68k - Motorola 68000 microblaze - Xilinx 32-bit FPGA (Big Endian) microblazeel - Xilinx 32-bit FPGA (Little Endian) mips - MIPS 32-bit RISC (Big Endian) mipsel - MIPS 32-bit RISC (Little Endian) mips64 - MIPS 64-bit RISC (Big Endian) mips64el - MIPS 64-bit RISC (Little Endian) openrisc - OpenCores RISC parisc - HP Precision Architecture RISC parisc64 - HP Precision Architecture 64-bit RISC ppc - PowerPC 32-bit ppc64 - PowerPC 64-bit ppcemb - PowerPC (Embedded 32-bit) s390 - IBM Enterprise Systems Architecture/390 s390x - S/390 64-bit sh4 - SuperH SH-4 (Little Endian) sh4eb - SuperH SH-4 (Big Endian) sparc - Scalable Processor Architecture, 32-bit sparc64 - Scalable Processor Architecture, 64-bit unicore32 - Microprocessor Research and Development Center RISC Unicore32 x86_64 - 64-bit extension of IA-32 xtensa - Tensilica Xtensa configurable microprocessor core xtensaeb - Tensilica Xtensa configurable microprocessor core (Big Endian) All hypervisor_type The hypervisor type. kvm , vmware All instance_uuid For snapshot images, this is the UUID of the server used to create this image. Valid server UUID All kernel_id The ID of an image stored in the Image Service that should be used as the kernel when booting an AMI-style image. Valid image ID All os_distro The common name of the operating system distribution in lowercase. arch - Arch Linux. Do not use archlinux or org.archlinux . centos - Community Enterprise Operating System. Do not use org.centos or CentOS . debian - Debian. Do not use Debian or org.debian . fedora - Fedora. Do not use Fedora , org.fedora , or org.fedoraproject . freebsd - FreeBSD. Do not use org.freebsd , freeBSD , or FreeBSD . gentoo - Gentoo Linux. Do not use Gentoo or org.gentoo . mandrake - Mandrakelinux (MandrakeSoft) distribution. Do not use mandrakelinux or MandrakeLinux . mandriva - Mandriva Linux. Do not use mandrivalinux . mes - Mandriva Enterprise Server. Do not use mandrivaent or mandrivaES . msdos - Microsoft Disc Operating System. Do not use ms-dos . netbsd - NetBSD. Do not use NetBSD or org.netbsd . netware - Novell NetWare. Do not use novell or NetWare . openbsd - OpenBSD. Do not use OpenBSD or org.openbsd . opensolaris - OpenSolaris. Do not use OpenSolaris or org.opensolaris . opensuse - openSUSE. Do not use suse , SuSE , or org.opensuse . rhel - Red Hat Enterprise Linux. Do not use redhat , RedHat , or com.redhat . sled - SUSE Linux Enterprise Desktop. Do not use com.suse . ubuntu - Ubuntu. Do not use Ubuntu , com.ubuntu , org.ubuntu , or canonical . windows - Microsoft Windows. Do not use com.microsoft.server . All os_version The operating system version as specified by the distributor. Version number (for example, "11.10") All ramdisk_id The ID of image stored in the Image Service that should be used as the ramdisk when booting an AMI-style image. Valid image ID All vm_mode The virtual machine mode. 
This represents the host/guest ABI (application binary interface) used for the virtual machine. hvm -Fully virtualized. This is the mode used by QEMU and KVM. libvirt API driver hw_disk_bus Specifies the type of disk controller to attach disk devices to. scsi , virtio , ide , or usb . Note that if using iscsi , the hw_scsi_model needs to be set to virtio-scsi . libvirt API driver hw_cdrom_bus Specifies the type of disk controller to attach CD-ROM devices to. scsi , virtio , ide , or usb . If you specify iscsi , you must set the hw_scsi_model parameter to virtio-scsi . libvirt API driver hw_numa_nodes Number of NUMA nodes to expose to the instance (does not override flavor definition). Integer. libvirt API driver hw_numa_cpus.0 Mapping of vCPUs N-M to NUMA node 0 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_cpus.1 Mapping of vCPUs N-M to NUMA node 1 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_mem.0 Mapping N MB of RAM to NUMA node 0 (does not override flavor definition). Integer libvirt API driver hw_numa_mem.1 Mapping N MB of RAM to NUMA node 1 (does not override flavor definition). Integer libvirt API driver hw_qemu_guest_agent Guest agent support. If set to yes , and if qemu-ga is also installed, file systems can be quiesced (frozen) and snapshots created automatically. yes / no libvirt API driver hw_rng_model Adds a random number generator (RNG) device to instances launched with this image. The instance flavor enables the RNG device by default. To disable the RNG device, the cloud administrator must set hw_rng:allowed to False on the flavor. The default entropy source is /dev/random . To specify a hardware RNG device, set rng_dev_path to /dev/hwrng in your Compute environment file. virtio , or other supported device. libvirt API driver hw_scsi_model Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware. virtio-scsi libvirt API driver hw_video_model The video device driver for the display device to use in virtual machine instances. Set to one of the following values to specify the supported driver to use: virtio - Recommended Driver for the virtual machine display device, supported by most architectures. The VirtIO GPU driver is included in RHEL-7 and later, and Linux kernel versions 4.4 and later. If an instance kernel has the VirtIO GPU driver, then the instance can use all the VirtIO GPU features. If an instance kernel does not have the VirtIO GPU driver, the VirtIO GPU device gracefully falls back to VGA compatibility mode, which provides a working display for the instance. qxl - Deprecated Driver for Spice or noVNC environments that is no longer maintained. cirrus - Legacy driver. vga - Use this driver for IBM Power environments. gop - Not supported for QEMU/KVM environments. xen - Not supported for KVM environments. vmvga - Legacy driver, do not use. none - Use this value to disable emulated graphics or video in virtual GPU (vGPU) instances where the driver is configured separately. libvirt API driver hw_video_ram Maximum RAM for the video image. Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram . 
Integer in MB (for example, 64 ) libvirt API driver hw_watchdog_action Enables a virtual hardware watchdog device that carries out the specified action if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. disabled-The device is not attached. Allows the user to disable the watchdog for the image, even if it has been enabled using the image's flavor. The default value for this parameter is disabled. reset-Forcefully reset the guest. poweroff-Forcefully power off the guest. pause-Pause the guest. none-Only enable the watchdog; do nothing if the server hangs. libvirt API driver os_command_line The kernel command line to be used by the libvirt driver, instead of the default. For Linux Containers (LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami). libvirt API driver and VMware API driver hw_vif_model Specifies the model of virtual network interface device to use. The valid options depend on the configured hypervisor. KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, and virtio. VMware: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139. VMware API driver vmware_adaptertype The virtual SCSI or IDE controller used by the hypervisor. lsiLogic , busLogic , or ide VMware API driver vmware_ostype A VMware GuestID which describes the operating system installed in the image. This value is passed to the hypervisor when creating a virtual machine. If not specified, the key defaults to otherGuest . For more information, see Images with VMware vSphere . VMware API driver vmware_image_version Currently unused. 1 XenAPI driver auto_disk_config If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format. true / false libvirt API driver and XenAPI driver os_type The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters. linux or windows
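To make the workflow concrete, the commands below sketch how a few of the keys from this table could be applied to one image with the glance image-update command used elsewhere in this appendix. The image UUID, the distribution, and the version number are placeholder values chosen for illustration; only the property names and their supported values are taken from the table.
# Placeholder UUID: substitute the ID of your own image.
IMG_UUID=0d2bf8a0-aaaa-bbbb-cccc-000000000000
# Record the guest architecture and operating system metadata.
glance image-update $IMG_UUID --property architecture=x86_64 --property os_distro=rhel --property os_version=8.4
# Attach disk devices through virtio-scsi instead of the default virtio-blk.
glance image-update $IMG_UUID --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi
# Enable the QEMU guest agent so that file systems can be quiesced for snapshots.
glance image-update $IMG_UUID --property hw_qemu_guest_agent=yes
Each call adds or overwrites only the named properties, so keys can be applied incrementally as the image requirements become clear.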
[ "glance image-update IMG-UUID --property architecture=x86_64" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_images/appx-image-config-parameters
Chapter 12. OSGi Services
Chapter 12. OSGi Services Abstract The OSGi core framework defines the OSGi Service Layer , which provides a simple mechanism for bundles to interact by registering Java objects as services in the OSGi service registry . One of the strengths of the OSGi service model is that any Java object can be offered as a service: there are no particular constraints, inheritance rules, or annotations that must be applied to the service class. This chapter describes how to deploy an OSGi service using the OSGi Blueprint container . 12.1. The Blueprint Container Abstract The Blueprint container is a dependency injection framework that simplifies interaction with the OSGi container. The Blueprint container supports a configuration-based approach to using the OSGi service registry-for example, providing standard XML elements to import and export OSGi services. 12.1.1. Blueprint Configuration Location of Blueprint files in a JAR file Relative to the root of the bundle JAR file, the standard location for Blueprint configuration files is the following directory: Any files with the suffix, .xml , under this directory are interpreted as Blueprint configuration files; in other words, any files that match the pattern, OSGI-INF/blueprint/*.xml . Location of Blueprint files in a Maven project In the context of a Maven project, ProjectDir , the standard location for Blueprint configuration files is the following directory: Blueprint namespace and root element Blueprint configuration elements are associated with the following XML namespace: The root element for Blueprint configuration is blueprint , so a Blueprint XML configuration file normally has the following outline form: Note In the blueprint root element, there is no need to specify the location of the Blueprint schema using an xsi:schemaLocation attribute, because the schema location is already known to the Blueprint framework. Blueprint Manifest configuration Some aspects of Blueprint configuration are controlled by headers in the JAR's manifest file, META-INF/MANIFEST.MF , as follows: Custom Blueprint file locations . Mandatory dependencies . Custom Blueprint file locations If you need to place your Blueprint configuration files in a non-standard location (that is, somewhere other than OSGI-INF/blueprint/*.xml ), you can specify a comma-separated list of alternative locations in the Bundle-Blueprint header in the manifest file-for example: Mandatory dependencies Dependencies on an OSGi service are mandatory by default (although this can be changed by setting the availability attribute to optional on a reference element or a reference-list element). Declaring a dependency to be mandatory means that the bundle cannot function properly without that dependency and the dependency must be available at all times. Normally, while a Blueprint container is initializing, it passes through a grace period , during which time it attempts to resolve all mandatory dependencies. If the mandatory dependencies cannot be resolved in this time (the default timeout is 5 minutes), container initialization is aborted and the bundle is not started. The following settings can be appended to the Bundle-SymbolicName manifest header to configure the grace period: blueprint.graceperiod If true (the default), the grace period is enabled and the Blueprint container waits for mandatory dependencies to be resolved during initialization; if false , the grace period is skipped and the container does not check whether the mandatory dependencies are resolved. 
blueprint.timeout Specifies the grace period timeout in milliseconds. The default is 300000 (5 minutes). For example, to enable a grace period of 10 seconds, you could define the following Bundle-SymbolicName header in the manifest file: The value of the Bundle-SymbolicName header is a semi-colon separated list, where the first item is the actual bundle symbolic name, the second item, blueprint.graceperiod:=true , enables the grace period, and the third item, blueprint.timeout:= 10000 , specifies a 10 second timeout. 12.1.2. Defining a Service Bean Overview The Blueprint container enables you to instantiate Java classes using a bean element. You can create all of your main application objects this way. In particular, you can use the bean element to create a Java object that represents an OSGi service instance. Blueprint bean element The Blueprint bean element is defined in the Blueprint schema namespace, http://www.osgi.org/xmlns/blueprint/v1.0.0 . Sample beans The following example shows how to create a few different types of bean using Blueprint's bean element: Where the Account class referenced by the last bean example could be defined as follows: References For more details on defining Blueprint beans, consult the following references: Spring Dynamic Modules Reference Guide v2.0, Blueprint chapter . Section 121 Blueprint Container Specification , from the OSGi Compendium Services R4.2 specification. 12.1.3. Using properties to configure Blueprint Overview This section describes how to configure Blueprint using properties held in a file which is outside the Camel context. Configuring Blueprint beans Blueprint beans can be configured by using variables that can be substituted with properties from an external file. You need to declare the ext namespace and add the property placeholder bean in your Blueprint XML file. Use the Property-Placeholder bean to declare the location of your properties file to Blueprint. The specification of property-placeholder configuration options can be found at http://aries.apache.org/schemas/blueprint-ext/blueprint-ext.xsd . 12.2. Exporting a Service Overview This section describes how to export a Java object to the OSGi service registry, thus making it accessible as a service to other bundles in the OSGi container. Exporting with a single interface To export a service to the OSGi service registry under a single interface name, define a service element that references the relevant service bean, using the ref attribute, and specifies the published interface, using the interface attribute. For example, you could export an instance of the SavingsAccountImpl class under the org.fusesource.example.Account interface name using the Blueprint configuration code shown in Example 12.1, "Sample Service Export with a Single Interface" . Example 12.1. Sample Service Export with a Single Interface Where the ref attribute specifies the ID of the corresponding bean instance and the interface attribute specifies the name of the public Java interface under which the service is registered in the OSGi service registry. The classes and interfaces used in this example are shown in Example 12.2, "Sample Account Classes and Interfaces" . Example 12.2. Sample Account Classes and Interfaces Exporting with multiple interfaces To export a service to the OSGi service registry under multiple interface names, define a service element that references the relevant service bean, using the ref attribute, and specifies the published interfaces, using the interfaces child element.
For example, you could export an instance of the SavingsAccountImpl class under the list of public Java interfaces, org.fusesource.example.Account and org.fusesource.example.SavingsAccount , using the following Blueprint configuration code: Note The interface attribute and the interfaces element cannot be used simultaneously in the same service element. You must use either one or the other. Exporting with auto-export If you want to export a service to the OSGi service registry under all of its implemented public Java interfaces, there is an easy way of accomplishing this using the auto-export attribute. For example, to export an instance of the SavingsAccountImpl class under all of its implemented public interfaces, use the following Blueprint configuration code: Where the interfaces value of the auto-export attribute indicates that Blueprint should register all of the public interfaces implemented by SavingsAccountImpl . The auto-export attribute can have the following valid values: disabled Disables auto-export. This is the default. interfaces Registers the service under all of its implemented public Java interfaces. class-hierarchy Registers the service under its own type (class) and under all super-types (super-classes), except for the Object class. all-classes Like the class-hierarchy option, but including all of the implemented public Java interfaces as well. Setting service properties The OSGi service registry also allows you to associate service properties with a registered service. Clients of the service can then use the service properties to search for or filter services. To associate service properties with an exported service, add a service-properties child element that contains one or more beans:entry elements (one beans:entry element for each service property). For example, to associate the bank.name string property with a savings account service, you could use the following Blueprint configuration: Where the bank.name string property has the value, HighStreetBank . It is possible to define service properties of type other than string: that is, primitive types, arrays, and collections are also supported. For details of how to define these types, see Controlling the Set of Advertised Properties . in the Spring Reference Guide . Note The entry element ought to belong to the Blueprint namespace. The use of the beans:entry element in Spring's implementation of Blueprint is non-standard. Default service properties There are two service properties that might be set automatically when you export a service using the service element, as follows: osgi.service.blueprint.compname -is always set to the id of the service's bean element, unless the bean is inlined (that is, the bean is defined as a child element of the service element). Inlined beans are always anonymous. service.ranking -is automatically set, if the ranking attribute is non-zero. Specifying a ranking attribute If a bundle looks up a service in the service registry and finds more than one matching service, you can use ranking to determine which of the services is returned. The rule is that, whenever a lookup matches multiple services, the service with the highest rank is returned. The service rank can be any non-negative integer, with 0 being the default. 
You can specify the service ranking by setting the ranking attribute on the service element-for example: Specifying a registration listener If you want to keep track of service registration and unregistration events, you can define a registration listener callback bean that receives registration and unregistration event notifications. To define a registration listener, add a registration-listener child element to a service element. For example, the following Blueprint configuration defines a listener bean, listenerBean , which is referenced by a registration-listener element, so that the listener bean receives callbacks whenever an Account service is registered or unregistered: Where the registration-listener element's ref attribute references the id of the listener bean, the registration-method attribute specifies the name of the listener method that receives the registration callback, and unregistration-method attribute specifies the name of the listener method that receives the unregistration callback. The following Java code shows a sample definition of the Listener class that receives notifications of registration and unregistration events: The method names, register and unregister , are specified by the registration-method and unregistration-method attributes respectively. The signatures of these methods must conform to the following syntax: First method argument -any type T that is assignable from the service object's type. In other words, any supertype class of the service class or any interface implemented by the service class. This argument contains the service instance, unless the service bean declares the scope to be prototype , in which case this argument is null (when the scope is prototype , no service instance is available at registration time). Second method argument -must be of either java.util.Map type or java.util.Dictionary type. This map contains the service properties associated with this service registration. 12.3. Importing a Service Overview This section describes how to obtain and use references to OSGi services that have been exported to the OSGi service registry. You can use either the reference element or the reference-list element to import an OSGi service. The reference element is suitable for accessing stateless services, while the reference-list element is suitable for accessing stateful services. Managing service references The following models for obtaining OSGi services references are supported: Reference manager . Reference list manager . Reference manager A reference manager instance is created by the Blueprint reference element. This element returns a single service reference and is the preferred approach for accessing stateless services. Figure 12.1, "Reference to Stateless Service" shows an overview of the model for accessing a stateless service using the reference manager. Figure 12.1. Reference to Stateless Service Beans in the client Blueprint container get injected with a proxy object (the provided object ), which is backed by a service object (the backing service ) from the OSGi service registry. This model explicitly takes advantage of the fact that stateless services are interchangeable, in the following ways: If multiple services instances are found that match the criteria in the reference element, the reference manager can arbitrarily choose one of them as the backing instance (because they are interchangeable). 
If the backing service disappears, the reference manager can immediately switch to using one of the other available services of the same type. Hence, there is no guarantee, from one method invocation to the next, that the proxy remains connected to the same backing service. The contract between the client and the backing service is thus stateless , and the client must not assume that it is always talking to the same service instance. If no matching service instances are available, the proxy will wait for a certain length of time before throwing the ServiceUnavailable exception. The length of the timeout is configurable by setting the timeout attribute on the reference element. Reference list manager A reference list manager instance is created by the Blueprint reference-list element. This element returns a list of service references and is the preferred approach for accessing stateful services. Figure 12.2, "List of References to Stateful Services" shows an overview of the model for accessing a stateful service using the reference list manager. Figure 12.2. List of References to Stateful Services Beans in the client Blueprint container get injected with a java.util.List object (the provided object ), which contains a list of proxy objects. Each proxy is backed by a unique service instance in the OSGi service registry. Unlike the stateless model, backing services are not considered to be interchangeable here. In fact, the lifecycle of each proxy in the list is tightly linked to the lifecycle of the corresponding backing service: when a service gets registered in the OSGi registry, a corresponding proxy is synchronously created and added to the proxy list; and when a service gets unregistered from the OSGi registry, the corresponding proxy is synchronously removed from the proxy list. The contract between a proxy and its backing service is thus stateful , and the client may assume, when it invokes methods on a particular proxy, that it is always communicating with the same backing service. It could happen, however, that the backing service becomes unavailable, in which case the proxy becomes stale. Any attempt to invoke a method on a stale proxy will generate the ServiceUnavailable exception. Matching by interface (stateless) The simplest way to obtain a stateless service reference is by specifying the interface to match, using the interface attribute on the reference element. The service is deemed to match if the interface attribute value is a super-type of the service or if the attribute value is a Java interface implemented by the service (the interface attribute can specify either a Java class or a Java interface). For example, to reference a stateless SavingsAccount service (see Example 12.1, "Sample Service Export with a Single Interface" ), define a reference element as follows: Where the reference element creates a reference manager bean with the ID, savingsRef . To use the referenced service, inject the savingsRef bean into one of your client classes, as shown. The bean property injected into the client class can be any type that is assignable from SavingsAccount . For example, you could define the Client class as follows: Matching by interface (stateful) The simplest way to obtain a stateful service reference is by specifying the interface to match, using the interface attribute on the reference-list element.
The reference list manager then obtains a list of all the services for which the interface attribute value is either a super-type of the service or a Java interface implemented by the service (the interface attribute can specify either a Java class or a Java interface). For example, to reference a stateful SavingsAccount service (see Example 12.1, "Sample Service Export with a Single Interface" ), define a reference-list element as follows: Where the reference-list element creates a reference list manager bean with the ID, savingsListRef . To use the referenced service list, inject the savingsListRef bean reference into one of your client classes, as shown. By default, the savingsAccountList bean property is a list of service objects (for example, java.util.List<SavingsAccount> ). You could define the client class as follows: Matching by interface and component name To match both the interface and the component name (bean ID) of a stateless service, specify both the interface attribute and the component-name attribute on the reference element, as follows: To match both the interface and the component name (bean ID) of a stateful service, specify both the interface attribute and the component-name attribute on the reference-list element, as follows: Matching service properties with a filter You can select services by matching service properties against a filter. The filter is specified using the filter attribute on the reference element or on the reference-list element. The value of the filter attribute must be an LDAP filter expression . For example, to define a filter that matches when the bank.name service property equals HighStreetBank , you could use the following LDAP filter expression: To match two service property values, you can use the & conjunction, which combines expressions with a logical and . For example, to require that the foo property is equal to FooValue and the bar property is equal to BarValue , you could use the following LDAP filter expression: For the complete syntax of LDAP filter expressions, see section 3.2.7 of the OSGi Core Specification . Filters can also be combined with the interface and component-name settings, in which case all of the specified conditions are required to match. For example, to match a stateless service of SavingsAccount type, with a bank.name service property equal to HighStreetBank , you could define a reference element as follows: To match a stateful service of SavingsAccount type, with a bank.name service property equal to HighStreetBank , you could define a reference-list element as follows: Specifying whether mandatory or optional By default, a reference to an OSGi service is assumed to be mandatory (see Mandatory dependencies ). It is possible to customize the dependency behavior of a reference element or a reference-list element by setting the availability attribute on the element. There are two possible values of the availability attribute: mandatory (the default), which means that the dependency must be resolved during normal Blueprint container initialization, and optional , which means that the dependency need not be resolved during initialization.
The following example of a reference element shows how to declare explicitly that the reference is a mandatory dependency: Specifying a reference listener To cope with the dynamic nature of the OSGi environment-for example, if you have declared some of your service references to have optional availability-it is often useful to track when a backing service gets bound to the registry and when it gets unbound from the registry. To receive notifications of service binding and unbinding events, you can define a reference-listener element as the child of either the reference element or the reference-list element. For example, the following Blueprint configuration shows how to define a reference listener as a child of the reference manager with the ID, savingsRef : The preceding configuration registers an instance of org.fusesource.example.client.Listener type as a callback that listens for bind and unbind events. Events are generated whenever the savingsRef reference manager's backing service binds or unbinds. The following example shows a sample implementation of the Listener class: The method names, onBind and onUnbind , are specified by the bind-method and unbind-method attributes respectively. Both of these callback methods take an org.osgi.framework.ServiceReference argument. 12.4. Publishing an OSGi Service 12.4.1. Overview This section explains how to generate, build, and deploy a simple OSGi service in the OSGi container. The service is a simple Hello World Java class and the OSGi configuration is defined using a Blueprint configuration file. 12.4.2. Prerequisites In order to generate a project using the Maven Quickstart archetype, you must have the following prerequisites: Maven installation -Maven is a free, open source build tool from Apache. You can download the latest version from http://maven.apache.org/download.html (minimum is 2.0.9). Internet connection -whilst performing a build, Maven dynamically searches external repositories and downloads the required artifacts on the fly. In order for this to work, your build machine must be connected to the Internet. 12.4.3. Generating a Maven project The maven-archetype-quickstart archetype creates a generic Maven project, which you can then customize for whatever purpose you like. To generate a Maven project with the coordinates, org.fusesource.example:osgi-service , enter the following command: The result of this command is a directory, ProjectDir /osgi-service , containing the files for the generated project. Note Be careful not to choose a group ID for your artifact that clashes with the group ID of an existing product! This could lead to clashes between your project's packages and the packages from the existing product (because the group ID is typically used as the root of a project's Java package names). 12.4.4. Customizing the POM file You must customize the POM file in order to generate an OSGi bundle, as follows: Follow the POM customization steps described in Section 5.1, "Generating a Bundle Project" . In the configuration of the Maven bundle plug-in, modify the bundle instructions to export the org.fusesource.example.service package, as follows: 12.4.5. Writing the service interface Create the ProjectDir /osgi-service/src/main/java/org/fusesource/example/service sub-directory. In this directory, use your favorite text editor to create the file, HelloWorldSvc.java , and add the code from Example 12.3, "The HelloWorldSvc Interface" to it. Example 12.3. The HelloWorldSvc Interface 12.4.6. 
Writing the service class Create the ProjectDir /osgi-service/src/main/java/org/fusesource/example/service/impl sub-directory. In this directory, use your favorite text editor to create the file, HelloWorldSvcImpl.java , and add the code from Example 12.4, "The HelloWorldSvcImpl Class" to it. Example 12.4. The HelloWorldSvcImpl Class 12.4.7. Writing the Blueprint file The Blueprint configuration file is an XML file stored under the OSGI-INF/blueprint directory on the class path. To add a Blueprint file to your project, first create the following sub-directories: Where the src/main/resources is the standard Maven location for all JAR resources. Resource files under this directory will automatically be packaged in the root scope of the generated bundle JAR. Example 12.5, "Blueprint File for Exporting a Service" shows a sample Blueprint file that creates a HelloWorldSvc bean, using the bean element, and then exports the bean as an OSGi service, using the service element. Under the ProjectDir /osgi-service/src/main/resources/OSGI-INF/blueprint directory, use your favorite text editor to create the file, config.xml , and add the XML code from Example 12.5, "Blueprint File for Exporting a Service" . Example 12.5. Blueprint File for Exporting a Service 12.4.8. Running the service bundle To install and run the osgi-service project, perform the following steps: Build the project -open a command prompt and change directory to ProjectDir /osgi-service . Use Maven to build the demonstration by entering the following command: If this command runs successfully, the ProjectDir /osgi-service/target directory should contain the bundle file, osgi-service-1.0-SNAPSHOT.jar . Install and start the osgi-service bundle -at the Red Hat Fuse console, enter the following command: Where ProjectDir is the directory containing your Maven projects and the -s flag directs the container to start the bundle right away. For example, if your project directory is C:\Projects on a Windows machine, you would enter the following command: Note On Windows machines, be careful how you format the file URL-for details of the syntax understood by the file URL handler, see Section 15.1, "File URL Handler" . Check that the service has been created -to check that the bundle has started successfully, enter the following Red Hat Fuse console command: Somewhere in this listing, you should see a line for the osgi-service bundle, for example: 12.5. Accessing an OSGi Service 12.5.1. Overview This section explains how to generate, build, and deploy a simple OSGi client in the OSGi container. The client finds the simple Hello World service in the OSGi registry and invokes the sayHello() method on it. 12.5.2. Prerequisites In order to generate a project using the Maven Quickstart archetype, you must have the following prerequisites: Maven installation -Maven is a free, open source build tool from Apache. You can download the latest version from http://maven.apache.org/download.html (minimum is 2.0.9). Internet connection -whilst performing a build, Maven dynamically searches external repositories and downloads the required artifacts on the fly. In order for this to work, your build machine must be connected to the Internet. 12.5.3. Generating a Maven project The maven-archetype-quickstart archetype creates a generic Maven project, which you can then customize for whatever purpose you like. 
To generate a Maven project with the coordinates, org.fusesource.example:osgi-client , enter the following command: The result of this command is a directory, ProjectDir /osgi-client , containing the files for the generated project. Note Be careful not to choose a group ID for your artifact that clashes with the group ID of an existing product! This could lead to clashes between your project's packages and the packages from the existing product (because the group ID is typically used as the root of a project's Java package names). 12.5.4. Customizing the POM file You must customize the POM file in order to generate an OSGi bundle, as follows: Follow the POM customization steps described in Section 5.1, "Generating a Bundle Project" . Because the client uses the HelloWorldSvc Java interface, which is defined in the osgi-service bundle, it is necessary to add a Maven dependency on the osgi-service bundle. Assuming that the Maven coordinates of the osgi-service bundle are org.fusesource.example:osgi-service:1.0-SNAPSHOT , you should add the following dependency to the client's POM file: 12.5.5. Writing the Blueprint file To add a Blueprint file to your client project, first create the following sub-directories: Under the ProjectDir /osgi-client/src/main/resources/OSGI-INF/blueprint directory, use your favorite text editor to create the file, config.xml , and add the XML code from Example 12.6, "Blueprint File for Importing a Service" . Example 12.6. Blueprint File for Importing a Service Where the reference element creates a reference manager that finds a service of HelloWorldSvc type in the OSGi registry. The bean element creates an instance of the Client class and injects the service reference as the bean property, helloWorldSvc . In addition, the init-method attribute specifies that the Client.init() method is called during the bean initialization phase (that is, after the service reference has been injected into the client bean). 12.5.6. Writing the client class Under the ProjectDir /osgi-client/src/main/java/org/fusesource/example/client directory, use your favorite text editor to create the file, Client.java , and add the Java code from Example 12.7, "The Client Class" . Example 12.7. The Client Class The Client class defines a getter and a setter method for the helloWorldSvc bean property, which enables it to receive the reference to the Hello World service by injection. The init() method is called during the bean initialization phase, after property injection, which means that it is normally possible to invoke the Hello World service within the scope of this method. 12.5.7. Running the client bundle To install and run the osgi-client project, perform the following steps: Build the project - open a command prompt and change directory to ProjectDir /osgi-client . Use Maven to build the demonstration by entering the following command: If this command runs successfully, the ProjectDir /osgi-client/target directory should contain the bundle file, osgi-client-1.0-SNAPSHOT.jar . Install and start the osgi-client bundle - at the Red Hat Fuse console, enter the following command: Where ProjectDir is the directory containing your Maven projects and the -s flag directs the container to start the bundle right away. For example, if your project directory is C:\Projects on a Windows machine, you would enter the following command: Note On Windows machines, be careful how you format the file URL - for details of the syntax understood by the file URL handler, see Section 15.1, "File URL Handler" .
Client output - If the client bundle is started successfully, you should immediately see output like the following in the console: 12.6. Integration with Apache Camel 12.6.1. Overview Apache Camel provides a simple way to invoke OSGi services using the Bean language. This feature is automatically available whenever an Apache Camel application is deployed into an OSGi container and requires no special configuration. 12.6.2. Registry chaining When an Apache Camel route is deployed into the OSGi container, the CamelContext automatically sets up a registry chain for resolving bean instances: the registry chain consists of the OSGi registry, followed by the Blueprint registry. Now, if you try to reference a particular bean class or bean instance, the registry resolves the bean as follows: Look up the bean in the OSGi registry first. If a class name is specified, try to match this with the interface or class of an OSGi service. If no match is found in the OSGi registry, fall back on the Blueprint registry. 12.6.3. Sample OSGi service interface Consider the OSGi service defined by the following Java interface, which defines the single method, getGreeting() : 12.6.4. Sample service export When defining the bundle that implements the HelloBoston OSGi service, you could use the following Blueprint configuration to export the service: Where it is assumed that the HelloBoston interface is implemented by the HelloBostonImpl class (not shown). 12.6.5. Invoking the OSGi service from Java DSL After you have deployed the bundle containing the HelloBoston OSGi service, you can invoke the service from an Apache Camel application using the Java DSL. In the Java DSL, you invoke the OSGi service through the Bean language, as follows: In the bean command, the first argument is the OSGi interface or class, which must match the interface exported from the OSGi service bundle. The second argument is the name of the bean method you want to invoke. For full details of the bean command syntax, see Apache Camel Development Guide Bean Integration . Note When you use this approach, the OSGi service is implicitly imported. It is not necessary to import the OSGi service explicitly in this case. 12.6.6. Invoking the OSGi service from XML DSL In the XML DSL, you can also use the Bean language to invoke the HelloBoston OSGi service, but the syntax is slightly different. In the XML DSL, you invoke the OSGi service through the Bean language, using the method element, as follows: Note When you use this approach, the OSGi service is implicitly imported. It is not necessary to import the OSGi service explicitly in this case.
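As a quick sanity check after the deployment steps earlier in this chapter, you can ask the console which bundles and services are present. The commands below are a sketch: bundle:list appears in the procedures above, while service:list and the grep filter pattern are assumptions about the Karaf-based console, and the exact listing format varies between Fuse versions.
# Confirm that the example bundles are installed and Active.
karaf@root()> bundle:list | grep osgi-
# Show the registration details for the Hello World interface, including service
# properties such as osgi.service.blueprint.compname.
karaf@root()> service:list org.fusesource.example.service.HelloWorldSvc
If the client bundle started correctly, the Client.init() messages should also have appeared in the console at install time.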
[ "OSGI-INF/blueprint", "ProjectDir /src/main/resources/OSGI-INF/blueprint", "http://www.osgi.org/xmlns/blueprint/v1.0.0", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> </blueprint>", "Bundle-Blueprint: lib/account.xml, security.bp, cnf/*.xml", "Bundle-SymbolicName: org.fusesource.example.osgi-client; blueprint.graceperiod:=true; blueprint.timeout:= 10000", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"label\" class=\"java.lang.String\"> <argument value=\"LABEL_VALUE\"/> </bean> <bean id=\"myList\" class=\"java.util.ArrayList\"> <argument type=\"int\" value=\"10\"/> </bean> <bean id=\"account\" class=\"org.fusesource.example.Account\"> <property name=\"accountName\" value=\"john.doe\"/> <property name=\"balance\" value=\"10000\"/> </bean> </blueprint>", "package org.fusesource.example; public class Account { private String accountName; private int balance; public Account () { } public void setAccountName(String name) { this.accountName = name; } public void setBalance(int bal) { this.balance = bal; } }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.2.0\"> <ext:property-placeholder> <ext:location>file:etc/ldap.properties</ext:location> </ext:property-placeholder> <bean ...> <property name=\"myProperty\" value=\"USD{myProperty}\" /> </bean> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"savings\" class=\"org.fusesource.example.SavingsAccountImpl\"/> <service ref=\"savings\" interface=\"org.fusesource.example.Account\"/> </blueprint>", "package org.fusesource.example public interface Account { ... } public interface SavingsAccount { ... } public interface CheckingAccount { ... 
} public class SavingsAccountImpl implements SavingsAccount { } public class CheckingAccountImpl implements CheckingAccount { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"savings\" class=\"org.fusesource.example.SavingsAccountImpl\"/> <service ref=\"savings\"> <interfaces> <value>org.fusesource.example.Account</value> <value>org.fusesource.example.SavingsAccount</value> </interfaces> </service> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"savings\" class=\"org.fusesource.example.SavingsAccountImpl\"/> <service ref=\"savings\" auto-export=\"interfaces\"/> </blueprint>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:beans=\"http://www.springframework.org/schema/beans\" ...> <service ref=\"savings\" auto-export=\"interfaces\"> <service-properties> <beans:entry key=\"bank.name\" value=\"HighStreetBank\"/> </service-properties> </service> </blueprint>", "<service ref=\"savings\" interface=\"org.fusesource.example.Account\" ranking=\"10\" />", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" ...> <bean id=\"listenerBean\" class=\"org.fusesource.example.Listener\"/> <service ref=\"savings\" auto-export=\"interfaces\"> <registration-listener ref=\"listenerBean\" registration-method=\"register\" unregistration-method=\"unregister\"/> </service> </blueprint>", "package org.fusesource.example; public class Listener { public void register(Account service, java.util.Map serviceProperties) { } public void unregister(Account service, java.util.Map serviceProperties) { } }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <reference id=\"savingsRef\" interface=\"org.fusesource.example.SavingsAccount\"/> <bean id=\"client\" class=\"org.fusesource.example.client.Client\"> <property name=\"savingsAccount\" ref=\"savingsRef\"/> </bean> </blueprint>", "package org.fusesource.example.client; import org.fusesource.example.SavingsAccount; public class Client { SavingsAccount savingsAccount; // Bean properties public SavingsAccount getSavingsAccount() { return savingsAccount; } public void setSavingsAccount(SavingsAccount savingsAccount) { this.savingsAccount = savingsAccount; } }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <reference-list id=\"savingsListRef\" interface=\"org.fusesource.example.SavingsAccount\"/> <bean id=\"client\" class=\"org.fusesource.example.client.Client\"> <property name=\"savingsAccountList\" ref=\"savingsListRef\"/> </bean> </blueprint>", "package org.fusesource.example.client; import org.fusesource.example.SavingsAccount; public class Client { java.util.List<SavingsAccount> accountList; // Bean properties public java.util.List<SavingsAccount> getSavingsAccountList() { return accountList; } public void setSavingsAccountList( java.util.List<SavingsAccount> accountList ) { this.accountList = accountList; } }", "<reference id=\"savingsRef\" interface=\"org.fusesource.example.SavingsAccount\" component-name=\"savings\"/>", "<reference-list id=\"savingsRef\" interface=\"org.fusesource.example.SavingsAccount\" component-name=\"savings\"/>", "(bank.name=HighStreetBank)", "(&(foo=FooValue)(bar=BarValue))", "<reference id=\"savingsRef\" interface=\"org.fusesource.example.SavingsAccount\" filter=\"(bank.name=HighStreetBank)\"/>", "<reference-list id=\"savingsRef\" interface=\"org.fusesource.example.SavingsAccount\" filter=\"(bank.name=HighStreetBank)\"/>", "<reference id=\"savingsRef\" 
interface=\"org.fusesource.example.SavingsAccount\" availability=\"mandatory\"/>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <reference id=\"savingsRef\" interface=\"org.fusesource.example.SavingsAccount\" > <reference-listener bind-method=\"onBind\" unbind-method=\"onUnbind\"> <bean class=\"org.fusesource.example.client.Listener\"/> </reference-listener> </reference> <bean id=\"client\" class=\"org.fusesource.example.client.Client\"> <property name=\"savingsAcc\" ref=\"savingsRef\"/> </bean> </blueprint>", "package org.fusesource.example.client; import org.osgi.framework.ServiceReference; public class Listener { public void onBind(ServiceReference ref) { System.out.println(\"Bound service: \" + ref); } public void onUnbind(ServiceReference ref) { System.out.println(\"Unbound service: \" + ref); } }", "mvn archetype:create -DarchetypeArtifactId=maven-archetype-quickstart -DgroupId=org.fusesource.example -DartifactId=osgi-service", "<project ... > <build> <plugins> <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <extensions>true</extensions> <configuration> <instructions> <Bundle-SymbolicName>USD{pom.groupId}.USD{pom.artifactId}</Bundle-SymbolicName> <Export-Package>org.fusesource.example.service</Export-Package> </instructions> </configuration> </plugin> </plugins> </build> </project>", "package org.fusesource.example.service; public interface HelloWorldSvc { public void sayHello(); }", "package org.fusesource.example.service.impl; import org.fusesource.example.service.HelloWorldSvc; public class HelloWorldSvcImpl implements HelloWorldSvc { public void sayHello() { System.out.println( \"Hello World!\" ); } }", "ProjectDir /osgi-service/src/main/resources ProjectDir /osgi-service/src/main/resources/OSGI-INF ProjectDir /osgi-service/src/main/resources/OSGI-INF/blueprint", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"hello\" class=\"org.fusesource.example.service.impl.HelloWorldSvcImpl\"/> <service ref=\"hello\" interface=\"org.fusesource.example.service.HelloWorldSvc\"/> </blueprint>", "mvn install", "Jkaraf@root()> bundle:install -s file: ProjectDir /osgi-service/target/osgi-service-1.0-SNAPSHOT.jar", "karaf@root()> bundle:install -s file:C:/Projects/osgi-service/target/osgi-service-1.0-SNAPSHOT.jar", "karaf@root()> bundle:list", "[ 236] [Active ] [Created ] [ ] [ 60] osgi-service (1.0.0.SNAPSHOT)", "mvn archetype:create -DarchetypeArtifactId=maven-archetype-quickstart -DgroupId=org.fusesource.example -DartifactId=osgi-client", "<project ... 
> <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>osgi-service</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>", "ProjectDir /osgi-client/src/main/resources ProjectDir /osgi-client/src/main/resources/OSGI-INF ProjectDir /osgi-client/src/main/resources/OSGI-INF/blueprint", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <reference id=\"helloWorld\" interface=\"org.fusesource.example.service.HelloWorldSvc\"/> <bean id=\"client\" class=\"org.fusesource.example.client.Client\" init-method=\"init\"> <property name=\"helloWorldSvc\" ref=\"helloWorld\"/> </bean> </blueprint>", "package org.fusesource.example.client; import org.fusesource.example.service.HelloWorldSvc; public class Client { HelloWorldSvc helloWorldSvc; // Bean properties public HelloWorldSvc getHelloWorldSvc() { return helloWorldSvc; } public void setHelloWorldSvc(HelloWorldSvc helloWorldSvc) { this.helloWorldSvc = helloWorldSvc; } public void init() { System.out.println(\"OSGi client started.\"); if (helloWorldSvc != null) { System.out.println(\"Calling sayHello()\"); helloWorldSvc.sayHello(); // Invoke the OSGi service! } } }", "mvn install", "karaf@root()> bundle:install -s file: ProjectDir /osgi-client/target/osgi-client-1.0-SNAPSHOT.jar", "karaf@root()> bundle:install -s file:C:/Projects/osgi-client/target/osgi-client-1.0-SNAPSHOT.jar", "Bundle ID: 239 OSGi client started. Calling sayHello() Hello World!", "package org.fusesource.example.hello.boston; public interface HelloBoston { public String getGreeting(); }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <bean id=\"hello\" class=\"org.fusesource.example.hello.boston.HelloBostonImpl\"/> <service ref=\"hello\" interface=\" org.fusesource.example.hello.boston.HelloBoston \"/> </blueprint>", "from(\"timer:foo?period=5000\") .bean(org.fusesource.example.hello.boston.HelloBoston.class, \"getGreeting\") .log(\"The message contains: USD{body}\")", "<beans ...> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"timer:foo?period=5000\"/> <setBody> <method ref=\"org.fusesource.example.hello.boston.HelloBoston\" method=\"getGreeting\"/> </setBody> <log message=\"The message contains: USD{body}\"/> </route> </camelContext> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/deploysimple
Chapter 62. JmxTransTemplate schema reference
Chapter 62. JmxTransTemplate schema reference Used in: JmxTransSpec Property Property type Description deployment DeploymentTemplate Template for JmxTrans Deployment . pod PodTemplate Template for JmxTrans Pods . container ContainerTemplate Template for JmxTrans container. serviceAccount ResourceTemplate Template for the JmxTrans service account.
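To show where these template sections sit in practice, the following is a hedged sketch of a Kafka custom resource that sets metadata through spec.jmxTrans.template. The label and annotation keys, the environment variable, and the cluster name are invented for illustration, and the rest of the jmxTrans configuration (queries and output definitions) and the wider Kafka spec are deliberately elided, so treat this as a shape reference rather than a deployable resource.
# Sketch only: client-side dry run of a Kafka resource fragment that exercises JmxTransTemplate.
oc apply --dry-run=client -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # kafka, zookeeper, and the jmxTrans queries/outputDefinitions are omitted from this sketch
  jmxTrans:
    template:
      deployment:
        metadata:
          labels:
            example.com/tier: monitoring
      pod:
        metadata:
          annotations:
            example.com/scrape: "true"
      container:
        env:
          - name: EXAMPLE_LOG_LEVEL
            value: info
      serviceAccount:
        metadata:
          labels:
            example.com/owner: metrics-team
EOF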
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-jmxtranstemplate-reference
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1]
Chapter 3. ClusterResourceQuota [quota.openshift.io/v1] Description ClusterResourceQuota mirrors ResourceQuota at a cluster scope. This object is easily convertible to synthetic ResourceQuota object to allow quota evaluation re-use. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec defines the desired quota status object Status defines the actual enforced quota and its current usage 3.1.1. .spec Description Spec defines the desired quota Type object Required quota selector Property Type Description quota object Quota defines the desired quota selector object Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. 3.1.2. .spec.quota Description Quota defines the desired quota Type object Property Type Description hard integer-or-string hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector object scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 3.1.3. .spec.quota.scopeSelector Description scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. Type object Property Type Description matchExpressions array A list of scope selector requirements by scope of the resources. matchExpressions[] object A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. 3.1.4. .spec.quota.scopeSelector.matchExpressions Description A list of scope selector requirements by scope of the resources. Type array 3.1.5. .spec.quota.scopeSelector.matchExpressions[] Description A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values. Type object Required operator scopeName Property Type Description operator string Represents a scope's relationship to a set of values. 
Valid operators are In, NotIn, Exists, DoesNotExist. scopeName string The name of the scope that the selector applies to. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.6. .spec.selector Description Selector is the selector used to match projects. It should only select active projects on the scale of dozens (though it can select many more less active projects). These projects will contend on object creation through this resource. Type object Property Type Description annotations undefined (string) AnnotationSelector is used to select projects by annotation. labels `` LabelSelector is used to select projects by label. 3.1.7. .status Description Status defines the actual enforced quota and its current usage Type object Required total Property Type Description namespaces `` Namespaces slices the usage by project. This division allows for quick resolution of deletion reconciliation inside of a single project without requiring a recalculation across all projects. This can be used to pull the deltas for a given project. total object Total defines the actual enforced quota and its current usage across all projects 3.1.8. .status.total Description Total defines the actual enforced quota and its current usage across all projects Type object Property Type Description hard integer-or-string Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used integer-or-string Used is the current observed total usage of the resource in the namespace. 3.2. API endpoints The following API endpoints are available: /apis/quota.openshift.io/v1/clusterresourcequotas DELETE : delete collection of ClusterResourceQuota GET : list objects of kind ClusterResourceQuota POST : create a ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas GET : watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} DELETE : delete a ClusterResourceQuota GET : read the specified ClusterResourceQuota PATCH : partially update the specified ClusterResourceQuota PUT : replace the specified ClusterResourceQuota /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} GET : watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status GET : read status of the specified ClusterResourceQuota PATCH : partially update status of the specified ClusterResourceQuota PUT : replace status of the specified ClusterResourceQuota 3.2.1. /apis/quota.openshift.io/v1/clusterresourcequotas HTTP method DELETE Description delete collection of ClusterResourceQuota Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterResourceQuota Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuotaList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterResourceQuota Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 202 - Accepted ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.2. /apis/quota.openshift.io/v1/watch/clusterresourcequotas HTTP method GET Description watch individual changes to a list of ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/quota.openshift.io/v1/clusterresourcequotas/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method DELETE Description delete a ClusterResourceQuota Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterResourceQuota Table 3.10. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterResourceQuota Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterResourceQuota Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty 3.2.4. /apis/quota.openshift.io/v1/watch/clusterresourcequotas/{name} Table 3.16. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method GET Description watch changes to an object of kind ClusterResourceQuota. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/quota.openshift.io/v1/clusterresourcequotas/{name}/status Table 3.18. Global path parameters Parameter Type Description name string name of the ClusterResourceQuota HTTP method GET Description read status of the specified ClusterResourceQuota Table 3.19. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterResourceQuota Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterResourceQuota Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body ClusterResourceQuota schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK ClusterResourceQuota schema 201 - Created ClusterResourceQuota schema 401 - Unauthorized Empty
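As a quick illustration of the create and read endpoints above, the following sketch POSTs a minimal ClusterResourceQuota through oc and then reads it back. The quota name, selector label, and hard limits are hypothetical example values, not values taken from this reference.
# Create a ClusterResourceQuota from an inline manifest (example values only)
cat <<'EOF' | oc create -f -
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: example-crq
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    labels:
      matchLabels:
        team: example
EOF
# Read the object back, including the enforced totals reported in .status
oc get clusterresourcequota example-crq -o yaml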
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/clusterresourcequota-quota-openshift-io-v1
3.3. Putting the Configuration Together
3.3. Putting the Configuration Together After determining which of the preceding routing methods to use, the hardware should be connected together and configured. Important The network adapters on the LVS routers must be configured to access the same networks. For instance if eth0 connects to the public network and eth1 connects to the private network, then these same devices on the backup LVS router must connect to the same networks. Also the gateway listed in the first interface to come up at boot time is added to the routing table and subsequent gateways listed in other interfaces are ignored. This is especially important to consider when configuring the real servers. After connecting the hardware to the network, configure the network interfaces on the primary and backup LVS routers. This should be done by editing the network configuration files manually. For more information about working with network configuration files, see the Red Hat Enterprise Linux 7 Networking Guide . 3.3.1. General Load Balancer Networking Tips Configure the real IP addresses for both the public and private networks on the LVS routers before attempting to configure Load Balancer using Keepalived. The sections on each topology give example network addresses, but the actual network addresses are needed. Below are some useful commands for bringing up network interfaces or checking their status. Bringing Up Real Network Interfaces To open a real network interface, use the following command as root , replacing N with the number corresponding to the interface ( eth0 and eth1 ). ifup eth N Warning Do not use the ifup scripts to open any floating IP addresses you may configure using Keepalived ( eth0:1 or eth1:1 ). Use the service or systemctl command to start keepalived instead. Bringing Down Real Network Interfaces To bring down a real network interface, use the following command as root , replacing N with the number corresponding to the interface ( eth0 and eth1 ). ifdown eth N Checking the Status of Network Interfaces If you need to check which network interfaces are up at any given time, enter the following command: ip link To view the routing table for a machine, issue the following command: ip route 3.3.2. Firewall Requirements If you are running a firewall (by means of firewalld or iptables ), you must allow VRRP traffic to pass between the keepalived nodes. To configure the firewall to allow the VRRP traffic with firewalld , run the following commands: If the zone is omitted the default zone will be used. If, however, you need to allow the VRRP traffic with iptables , run the following commands:
[ "firewall-cmd --add-rich-rule='rule protocol value=\"vrrp\" accept' --permanent firewall-cmd --reload", "iptables -I INPUT -p vrrp -j ACCEPT iptables-save > /etc/sysconfig/iptables systemctl restart iptables" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-connect-vsa
probe::softirq.exit
probe::softirq.exit Name probe::softirq.exit - Execution of handler for a pending softirq completed Synopsis softirq.exit Values vec_nr softirq vector number action pointer to softirq handler that just finished execution h struct softirq_action* for just executed softirq vec softirq_action vector
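For example, a one-line SystemTap script can attach to this probe and print each handler as it completes; this is a sketch that assumes the standard symname() tapset function is available to resolve the action pointer:
stap -e 'probe softirq.exit { printf("softirq vec=%d handler=%s\n", vec_nr, symname(action)) }'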
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-softirq-exit
7.48. environment-modules
7.48. environment-modules 7.48.1. RHBA-2013:0316 - environment-modules bug fix update Updated environment-modules packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The environment-modules packages provide for the dynamic modification of a user's environment using modulefiles. Each modulefile contains the information needed to configure the shell for an application. Once the package is initialized, the environment can be modified on a per-module basis using the module command, which interprets modulefiles. Note The environment-modules package has been upgraded to upstream version 3.2.9c, which provides a number of bug fixes over the previous version. (BZ# 765630 ) Bug Fixes BZ#818177 Due to an error in the Tcl library, some allocated pointers were invalidated inside the library. Consequently, running the "module switch" command in the tcsh shell led to a segmentation fault. The bug has been fixed and the system memory is now allocated and pointed to correctly. BZ#848865 Previously, the /usr/share/Modules/modulefiles/modules file contained an incorrect path. Consequently, an error occurred when the "module load modules" command was executed. With this update, the incorrect path has been replaced and the described error no longer occurs. All users of environment-modules are advised to upgrade to these updated packages, which fix these bugs.
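For reference, typical usage of the module command after the package is initialized in your shell looks like the following sketch (the modulefile names foo and bar are hypothetical and depend on what is installed at your site):
module avail           # list the modulefiles that can be loaded
module load modules    # load the default "modules" modulefile
module list            # show the modulefiles currently loaded
module switch foo bar  # replace modulefile foo with bar (hypothetical names)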
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/environment-modules
Chapter 39. Red Hat Enterprise Linux Atomic Host 7.2.5
Chapter 39. Red Hat Enterprise Linux Atomic Host 7.2.5 39.1. Atomic Host OStree update : New Tree Version: 7.2.5 (hash: 9bfe1fb65094d43e420490196de0e9aea26b3923f1c18ead557460b83356f058) Changes since Tree Version 7.2.4 (hash: b060975ce3d5abbf564ca720f64a909d1a4d332aae39cb4de581611526695a0c) Updated packages : rpm-ostree-client-2016.3.1.g5bd7211-2.atomic.el7.1 rpm-ostree-2016.3.1.g5bd7211-1.atomic.el7 ostree-2016.5-3.atomic.el7 cockpit-ostree-0.108-1.el7 New packages : openscap-daemon-0.1.5-1.el7 39.2. Extras Updated packages : atomic-1.10.5-5.el7 cockpit-0.108-1.el7 docker-1.10.3-44.el7 docker-distribution-2.4.1-1.el7 * docker-latest-1.10.3-44.el7 dpdk-2.2.0-3.el7 * etcd-2.2.5-2.el7 kubernetes-1.2.0-0.12.gita4463d9.el7 runc-0.1.1-4.el7 (Technology Preview) * The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 39.2.1. Container Images Updated : Red Hat Enterprise Linux Container Image (rhel7/rhel) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic Kubernetes-controller Container Image (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes-apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes-scheduler Container Image (rhel7/kubernetes-scheduler) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) (Technology Preview) New : Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) (Technology Preview) 39.3. New Features ostree admin unlock command now available Red Hat Enterprise Linux Atomic Host 7.2.5 introduces the new command ostree admin unlock. It allows users to unlock the current ostree deployment and install packages temporarily. This is done by mounting a writable overlayfs on /usr. When a user reboots, the overlayfs is unmounted and the packages are no longer installed. Use the ostree admin unlock --hotfix option for the changes, such as package installs to persist across reboots. This command provides the same capabilities as atomic-pkglayer, which is now deprecated. There are known issues with overlayfs and SELinux, so this functionality is not intended for long term use. Strict browser security policy for Cockpit is now enforced This defines what code can be run in a Cockpit session and mitigates a number of browser-based attacks.
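A minimal sketch of the ostree admin unlock workflow described above follows; the RPM file name is a placeholder, not a package shipped with this release:
ostree admin unlock              # mount a writable overlay on /usr; changes are discarded on reboot
rpm -Uvh ./some-package.rpm      # hypothetical package installed into the temporary overlay
# or, to keep the layered changes across reboots:
ostree admin unlock --hotfix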
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_2_5
Chapter 4. Installing RHEL AI on IBM Cloud
Chapter 4. Installing RHEL AI on IBM Cloud For installing and deploying Red Hat Enterprise Linux AI on IBM Cloud, you must first convert the RHEL AI image into an IBM Cloud image. You can then launch an instance using the IBM Cloud image and deploy RHEL AI on an IBM Cloud machine. 4.1. Converting the RHEL AI image into an IBM Cloud image To create a bootable image in IBM Cloud, you must configure your IBM Cloud accounts, set up a Cloud Object Storage (COS) bucket, and create an IBM Cloud image using the RHEL AI image. Prerequisites You installed the IBM Cloud CLI on your specific machine. For more information about installing the IBM Cloud CLI, see Installing the stand-alone IBM Cloud CLI. Procedure Log in to IBM Cloud with the following command: USD ibmcloud login When prompted, select your desired account to log in to. Example output of the login USD ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating... OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP' You need to set up various IBM Cloud configurations and create your COS bucket before generating a QCOW2 image. You can install the necessary IBM Cloud plugins by running the following command: USD ibmcloud plugin install cloud-object-storage infrastructure-service Set your preferred resource group. The following example command sets the resource group named Default. USD ibmcloud target -g Default Set your preferred region. The following example command sets the us-east region. USD ibmcloud target -r us-east You need to select a deployment plan for your service instance. Ensure you check the properties and pricing on the IBM Cloud website.
You can list the available deployment plans by running the following command: USD ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name' The following example command uses the premium-global-deployment plan and puts it in the environment variable cos_deploy_plan : USD cos_deploy_plan=premium-global-deployment Create a Cloud Object Storage (COS) service instance and save the name in an environment variable named cos_si_name and create the cloud-object-storage and by running the following commands: USD cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE USD ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan} Get the Cloud Resource Name (CRN) for your Cloud Object Storage (COS) bucket in a variable named cos_crn by running the following commands: USD cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains("cloud-object-storage")) | .crn') USD ibmcloud cos config crn --crn USD{cos_crn} --force Create your Cloud Object Storage (COS) bucket named as the environment variable bucket_name with the following commands: USD bucket_name=NAME_OF_MY_BUCKET USD ibmcloud cos bucket-create --bucket USD{bucket_name} Allow the infrastructure service to read the buckets that are in the service instance USD{cos_si_guid} variable by running the following commands: USD cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains("cloud-object-storage")) | .guid') USD ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid} Now that your IBM Cloud Object Storage (CoS) service instance bucket is set up, you need to download the QCOW2 image from Red Hat Enterprise Linux AI download page Copy the QCOW2 image link and add it to the following command: USD curl -Lo disk.qcow2 "PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE" Set the name you want to use as the RHEL AI IBM Cloud image USD image_name=rhel-ai-20240703v0 Upload the QCOW2 image to the Cloud Object Storage (COS) bucket with your selected region by running following command: USD ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region> Convert the QCOW2 you just uploaded to an IBM Cloud image with the following commands: USD ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol Once the job launches, set the IBM Cloud image configurations into a variable called image_id by running the following command: USD image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name=="'USDimage_name'") | .id') You can view the progress of the job with the following command: USD while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done You can view the information of the newly created image with the following command: USD ibmcloud is image USD{image_id} 4.2. Deploying your instance on IBM Cloud using the CLI You can launch an instance with your new RHEL AI IBM Cloud image from the IBM Cloud web console or the CLI. You can use whichever method of deployment you want to launch your instance. 
The following procedure displays how you can use the CLI to launch an IBM Cloud instance with the custom IBM Cloud image If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI IBM Cloud image. For more information, see "Converting the RHEL AI image to an IBM Cloud image". You installed the IBM CLI on your specific machine, see Installing the stand-alone IBM Cloud CLI . You configured your Virtual private cloud (VPC). You created a subnet for your instance. Procedure Log in to your IBM Cloud account and select the Account, Region and Resource Group by running the following command: USD ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP> Before launching your IBM Cloud instance on the CLI, you need to create several configuration variables for your instance. Install the infrastructure-service plugin for IBM Cloud by running the following command USD ibmcloud plugin install infrastructure-service You need to create an SSH public key for your IBM Cloud account. IBM Cloud supports RSA and ed25519 keys. The following example command uses the ed25519 key types and names it ibmcloud . USD ssh-keygen -f ibmcloud -t ed25519 You can now upload the public key to your IBM Cloud account by following the example command. USD ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519 You need to create a Floating IP for your IBM Cloud instance by following the example command. Ensure you change the region to your preferred zone. USD ibmcloud is floating-ip-reserve my-public-ip --zone <region> You need to select the instance profile that you want to use for the deployment. List all the profiles by running the following command: USD ibmcloud is instance-profiles Make a note of your preferred instance profile, you will need it for your instance deployment. You can now start creating your IBM Cloud instance. Populate environment variables for when you create the instance. name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250 You can now launch your instance, by running the following command: USD ibmcloud is instance-create \ USDname \ USDvpc \ USDzone \ USDinstance_profile \ USDsubnet \ --image USDimage \ --keys USDsshkey \ --boot-volume '{"name": "'USD{name}'-boot", "volume": {"name": "'USD{name}'-boot", "capacity": 'USD{disk_size}', "profile": {"name": "general-purpose"}}}' \ --allow-ip-spoofing false Link the Floating IP to the instance by running the following command: USD ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname User account The default user account in the RHEL AI AMI is cloud-user . It has all permissions via sudo without password. Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. 
data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train 4.3. Adding more storage to your IBM Cloud instance In IBM Cloud, there is a size restriction of 250 GB of storage in the main IBM Cloud disk. RHEL AI might require more storage for models and generation data. You can add more storage by attaching an extra disk to your instance and using it to hold data for RHEL AI. Prerequisites You have an IBM Cloud RHEL AI instance. Procedure Create an environment variable called name that has the name of your instance by running the following command: USD name=my-rhelai-instance Set the size of the new volume by running the following command: USD data_volume_size=1000 Create and attach the instance volume by running the following command: USD ibmcloud is instance-volume-attachment-add data USD{name} \ --new-volume-name USD{name}-data \ --profile general-purpose \ --capacity USD{data_volume_size} You can list all the disks with the following command: USD lsblk Create a disk variable that contains the path of the disk you are using. The following example command uses the /dev/vdb path. USD disk=/dev/vdb Create a partition on your disk by running the following command: USD sgdisk -n 1:0:0 USDdisk Format and label the partition by running the following command: USD mkfs.xfs -L ilab-data USD{disk}1 You can configure your system to automatically mount the new file system at your preferred directory. The following example command uses the /mnt directory. USD echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab Reload systemd to acknowledge the new mount configuration by running the following command: USD systemctl daemon-reload Mount the disk with the following command: USD mount -a Grant write permissions to all users in the new file system by running the following command: USD chmod 1777 /mnt/
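To confirm the new data volume is in place (a quick check, not part of the documented procedure), verify the mount and write access:
df -h /mnt                                      # the ilab-data filesystem should be mounted here
touch /mnt/.write-test && rm /mnt/.write-test   # confirms the directory is writable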
[ "ibmcloud login", "ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'", "ibmcloud plugin install cloud-object-storage infrastructure-service", "ibmcloud target -g Default", "ibmcloud target -r us-east", "ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'", "cos_deploy_plan=premium-global-deployment", "cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE", "ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}", "cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')", "ibmcloud cos config crn --crn USD{cos_crn} --force", "bucket_name=NAME_OF_MY_BUCKET", "ibmcloud cos bucket-create --bucket USD{bucket_name}", "cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')", "ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}", "curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"", "image_name=rhel-ai-20240703v0", "ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>", "ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol", "image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')", "while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done", "ibmcloud is image USD{image_id}", "ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>", "ibmcloud plugin install infrastructure-service", "ssh-keygen -f ibmcloud -t ed25519", "ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519", "ibmcloud is floating-ip-reserve my-public-ip --zone <region>", "ibmcloud is instance-profiles", "name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250", "ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' --allow-ip-spoofing false", "ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. 
[default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train", "name=my-rhelai-instance", "data_volume_size=1000", "ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}", "lsblk", "disk=/dev/vdb", "sgdisk -n 1:0:0 USDdisk", "mkfs.xfs -L ilab-data USD{disk}1", "echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab", "systemctl daemon-reload", "mount -a", "chmod 1777 /mnt/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/installing/installing_ibm_cloud
Appendix D. LVM Object Tags
Appendix D. LVM Object Tags An LVM tag is a word that can be used to group LVM2 objects of the same type together. Tags can be attached to objects such as physical volumes, volume groups, and logical volumes. Tags can be attached to hosts in a cluster configuration. Tags can be given on the command line in place of PV, VG or LV arguments. Tags should be prefixed with @ to avoid ambiguity. Each tag is expanded by replacing it with all objects possessing that tag which are of the type expected by its position on the command line. LVM tags are strings of up to 1024 characters. LVM tags cannot start with a hyphen. A valid tag can consist of a limited range of characters only. The allowed characters are [A-Za-z0-9_+.-]. As of the Red Hat Enterprise Linux 6.1 release, the list of allowed characters was extended, and tags can contain the /, =, !, :, #, and & characters. Only objects in a volume group can be tagged. Physical volumes lose their tags if they are removed from a volume group; this is because tags are stored as part of the volume group metadata and that is deleted when a physical volume is removed. The following command lists all the logical volumes with the database tag. The following command lists the currently active host tags. D.1. Adding and Removing Object Tags To add or delete tags from physical volumes, use the --addtag or --deltag option of the pvchange command. To add or delete tags from volume groups, use the --addtag or --deltag option of the vgchange or vgcreate commands. To add or delete tags from logical volumes, use the --addtag or --deltag option of the lvchange or lvcreate commands. You can specify multiple --addtag and --deltag arguments within a single pvchange , vgchange , or lvchange command. For example, the following command deletes the tags T9 and T10 and adds the tags T13 and T14 to the volume group grant .
[ "lvs @database", "lvm tags", "vgchange --deltag T9 --deltag T10 --addtag T13 --addtag T14 grant" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_tags
Chapter 5. Ansible content migration
Chapter 5. Ansible content migration If you are migrating from an ansible-core version to ansible-core 2.13, consider reviewing Ansible core Porting Guides to familiarize yourself with changes and updates between each version. When reviewing the Ansible core porting guides, ensure that you select the latest version of ansible-core or devel , which is located at the top left column of the guide. For a list of fully supported and certified Ansible Content Collections, see Ansible Automation hub on console.redhat.com . 5.1. Installing Ansible collections As part of the migration from earlier Ansible versions to more recent versions, you need to find and download the collections that include the modules you have been using. Once you find that list of collections, you can use one of the following options to include your collections locally: Download and install the Collection into your runtime or execution environments using ansible-builder . Update the 'requirements.yml' file in your Automation Controller project install roles and collections. This way every time you sync the project in Automation Controller the roles and collections will be downloaded. Note In many cases the upstream and downstream Collections could be the same, but always download your certified collections from Automation Hub. 5.2. Migrating your Ansible playbooks and roles to Core 2.13 When you are migrating from non collection-based content to collection-based content, you should use the Fully Qualified Collection Names (FQCN) in playbooks and roles to avoid unexpected behavior. Example playbook with FQCN: - name: get some info amazon.aws.ec2_vpc_net_info: region: "{{ec2_region}}" register: all_the_info delegate_to: localhost run_once: true If you are using ansible-core modules and are not calling a module from a different Collection, you should use the FQCN ansible.builtin.copy . Example module with FQCN: - name: copy file with owner and permissions ansible.builtin.copy: src: /srv/myfiles/foo.conf dest: /etc/foo.conf owner: foo group: foo mode: '0644' 5.3. Converting playbook examples Examples This example is of a shared directory called /mydata in which we want to be able to read and write files to during a job run. Remember this has to already exist on the execution node we will be using for the automation run. You will target the aape1.local execution node to run this job, because the underlying hosts already has this in place. [awx@aape1 ~]USD ls -la /mydata/ total 4 drwxr-xr-x. 2 awx awx 41 Apr 28 09:27 . dr-xr-xr-x. 19 root root 258 Apr 11 15:16 .. -rw-r--r--. 1 awx awx 33 Apr 11 12:34 file_read -rw-r--r--. 1 awx awx 0 Apr 28 09:27 file_write You will use a simple playbook to launch the automation with sleep defined to allow you access, and to understand the process, as well as demonstrate reading and writing to files. # vim:ft=ansible: - hosts: all gather_facts: false ignore_errors: yes vars: period: 120 myfile: /mydata/file tasks: - name: Collect only selected facts ansible.builtin.setup: filter: - 'ansible_distribution' - 'ansible_machine_id' - 'ansible_memtotal_mb' - 'ansible_memfree_mb' - name: "I'm feeling real sleepy..." ansible.builtin.wait_for: timeout: "{{ period }}" delegate_to: localhost - ansible.builtin.debug: msg: "Isolated paths mounted into execution node: {{ AWX_ISOLATIONS_PATHS }}" - name: "Read pre-existing file..." ansible.builtin.debug: msg: "{{ lookup('file', '{{ myfile }}_read' - name: "Write to a new file..." 
ansible.builtin.copy: dest: "{{ myfile }}_write" content: | This is the file I've just written to. - name: "Read written out file..." ansible.builtin.debug: msg: "{{ lookup('file', '{{ myfile }}_write') }}" From the Ansible Automation Platform 2 navigation panel, select Settings . Then select Job settings from the Jobs option. Paths to expose isolated jobs: [ "/mydata:/mydata:rw" ] The volume mount is mapped with the same name in the container and has read-write capability. This will get used when you launch the job template. The prompt on launch should be set for extra_vars so you can adjust the sleep duration for each run, The default is 30 seconds. Once launched, and the wait_for module is invoked for the sleep, you can go onto the execution node and look at what is running. To verify the run has completed successfully, run this command to get an output of the job: USD podman exec -it 'podman ps -q' /bin/bash bash-4.4# You are now inside the running execution environment container. Look at the permissions, you will see that awx has become 'root', but this is not really root as in the superuser, as you are using rootless Podman, which maps users into a kernel namespace similar to a sandbox. Learn more about How does rootless Podman work? for shadow-utils. bash-4.4# ls -la /mydata/ Total 4 drwxr-xr-x. 2 root root 41 Apr 28 09:27 . dr-xr-xr-x. 1 root root 77 Apr 28 09:40 .. -rw-r---r-. 1 root root 33 Apr 11 12:34 file_read -rw-r---r-. 1 root root 0 Apr 28 09:27 file_write According to the results, this job failed. In order to understand why, the remaining output needs to be examined. TASK [Read pre-existing file...]******************************* 10:50:12 ok: [localhost] => { "Msg": "This is the file I am reading in." TASK {Write to a new file...}********************************* 10:50:12 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b' /mydata/file_write' Fatal: [localhost]: FAILED! => {"changed": false, :checksum": "9f576o85d584287a3516ee8b3385cc6f69bf9ce", "msg": "Unable to make b'/root/.ansible/tmp/anisible-tim-1651139412.9808054-40-91081834383738/source' into /mydata/file_write, failed final rename from b'/mydata/.ansible_tmpazyqyqdrfile_write': [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write} ...ignoring TASK [Read written out file...] ****************************** 10:50:13 Fatal: [localhost]: FAILED: => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError;>, original message: could not locate file in lookup: /mydate/file_write. Vould not locate file in lookup: /mydate/file_write"} ...ignoring The job failed, even though :rw is set, so it should have write capability. The process was able to read the existing file, but not write out. This is due to SELinux protection that requires proper labels to be placed on the volume content mounted into the container. If the label is missing, SELinux may prevent the process from running inside the container. Labels set by the OS are not changed by Podman. See the Podman documentation for more information. This could be a common misinterpretation. We have set the default to :z , which tells Podman to relabel file objects on shared volumes. So we can either add :z or leave it off. 
Paths to expose isolated jobs: [ "/mydata:/mydata" ] The playbook will now work as expected: PLAY [all] **************************************************** 11:05:52 TASK [I'm feeling real sleepy. . .] *************************** 11:05:52 ok: [localhost] TASK [Read pre-existing file...] ****************************** 11:05:57 ok: [localhost] => { "Msg": "This is the file I'm reading in." } TASK [Write to a new file...] ********************************** 11:05:57 ok: [localhost] TASK [Read written out file...] ******************************** 11:05:58 ok: [localhost] => { "Msg": "This is the file I've just written to." Back on the underlying execution node host, we have the newly written out contents. Note If you are using container groups to launch automation jobs inside Red Hat OpenShift, you can also tell Ansible Automation Platform 2 to expose the same paths to that environment, but you must toggle the default to On under settings. Once enabled, this will inject this as volumeMounts and volumes inside the pod spec that will be used for execution. It will look like this: apiVersion: v1 kind: Pod Spec: containers: - image: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8 args: - ansible runner - worker - -private-data-dir=/runner volumeMounts: mountPath: /mnt2 name: volume-0 readOnly: true mountPath: /mnt3 name: volume-1 readOnly: true mountPath: /mnt4 name: volume-2 readOnly: true volumes: hostPath: path: /mnt2 type: "" name: volume-0 hostPath: path: /mnt3 type: "" name: volume-1 hostPath: path: /mnt4 type: "" name: volume-2 Storage inside the running container is using the overlay file system. Any modifications inside the running container are destroyed after the job completes, much like a tmpfs being unmounted.
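If you want to confirm the relabeling on the execution node (a quick check, not part of the original walkthrough), list the SELinux context of the shared path; after a run with the default :z behavior it should carry the container_file_t type:
ls -dZ /mydata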
[ "- name: get some info amazon.aws.ec2_vpc_net_info: region: \"{{ec2_region}}\" register: all_the_info delegate_to: localhost run_once: true", "- name: copy file with owner and permissions ansible.builtin.copy: src: /srv/myfiles/foo.conf dest: /etc/foo.conf owner: foo group: foo mode: '0644'", "[awx@aape1 ~]USD ls -la /mydata/ total 4 drwxr-xr-x. 2 awx awx 41 Apr 28 09:27 . dr-xr-xr-x. 19 root root 258 Apr 11 15:16 .. -rw-r--r--. 1 awx awx 33 Apr 11 12:34 file_read -rw-r--r--. 1 awx awx 0 Apr 28 09:27 file_write", "vim:ft=ansible:", "- hosts: all gather_facts: false ignore_errors: yes vars: period: 120 myfile: /mydata/file tasks: - name: Collect only selected facts ansible.builtin.setup: filter: - 'ansible_distribution' - 'ansible_machine_id' - 'ansible_memtotal_mb' - 'ansible_memfree_mb' - name: \"I'm feeling real sleepy...\" ansible.builtin.wait_for: timeout: \"{{ period }}\" delegate_to: localhost - ansible.builtin.debug: msg: \"Isolated paths mounted into execution node: {{ AWX_ISOLATIONS_PATHS }}\" - name: \"Read pre-existing file...\" ansible.builtin.debug: msg: \"{{ lookup('file', '{{ myfile }}_read' - name: \"Write to a new file...\" ansible.builtin.copy: dest: \"{{ myfile }}_write\" content: | This is the file I've just written to. - name: \"Read written out file...\" ansible.builtin.debug: msg: \"{{ lookup('file', '{{ myfile }}_write') }}\"", "[ \"/mydata:/mydata:rw\" ]", "podman exec -it 'podman ps -q' /bin/bash bash-4.4#", "bash-4.4# ls -la /mydata/ Total 4 drwxr-xr-x. 2 root root 41 Apr 28 09:27 . dr-xr-xr-x. 1 root root 77 Apr 28 09:40 .. -rw-r---r-. 1 root root 33 Apr 11 12:34 file_read -rw-r---r-. 1 root root 0 Apr 28 09:27 file_write", "TASK [Read pre-existing file...]******************************* 10:50:12 ok: [localhost] => { \"Msg\": \"This is the file I am reading in.\" TASK {Write to a new file...}********************************* 10:50:12 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b' /mydata/file_write' Fatal: [localhost]: FAILED! => {\"changed\": false, :checksum\": \"9f576o85d584287a3516ee8b3385cc6f69bf9ce\", \"msg\": \"Unable to make b'/root/.ansible/tmp/anisible-tim-1651139412.9808054-40-91081834383738/source' into /mydata/file_write, failed final rename from b'/mydata/.ansible_tmpazyqyqdrfile_write': [Errno 13] Permission denied: b'/mydata/.ansible_tmpazyqyqdrfile_write' -> b'/mydata/file_write} ...ignoring TASK [Read written out file...] ****************************** 10:50:13 Fatal: [localhost]: FAILED: => {\"msg\": \"An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError;>, original message: could not locate file in lookup: /mydate/file_write. Vould not locate file in lookup: /mydate/file_write\"} ...ignoring", "[ \"/mydata:/mydata\" ]", "PLAY [all] **************************************************** 11:05:52 TASK [I'm feeling real sleepy. . .] *************************** 11:05:52 ok: [localhost] TASK [Read pre-existing file...] ****************************** 11:05:57 ok: [localhost] => { \"Msg\": \"This is the file I'm reading in.\" } TASK [Write to a new file...] ********************************** 11:05:57 ok: [localhost] TASK [Read written out file...] 
******************************** 11:05:58 ok: [localhost] => { \"Msg\": \"This is the file I've just written to.\"", "apiVersion: v1 kind: Pod Spec: containers: - image: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8 args: - ansible runner - worker - -private-data-dir=/runner volumeMounts: mountPath: /mnt2 name: volume-0 readOnly: true mountPath: /mnt3 name: volume-1 readOnly: true mountPath: /mnt4 name: volume-2 readOnly: true volumes: hostPath: path: /mnt2 type: \"\" name: volume-0 hostPath: path: /mnt3 type: \"\" name: volume-1 hostPath: path: /mnt4 type: \"\" name: volume-2" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/content-migration
7.312. dbus-glib
7.312. dbus-glib 7.312.1. RHSA-2013:0568 - Important: dbus-glib security update Updated dbus-glib packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. dbus-glib is an add-on library to integrate the standard D-Bus library with the GLib main loop and threading model. Security Fix CVE-2013-0292 A flaw was found in the way dbus-glib filtered the message sender (message source subject) when the "NameOwnerChanged" signal was received. This could trick a system service using dbus-glib (such as fprintd) into believing a signal was sent from a privileged process, when it was not. A local attacker could use this flaw to escalate their privileges. All dbus-glib users are advised to upgrade to these updated packages, which contain a backported patch to correct this issue. All running applications linked against dbus-glib, such as fprintd and NetworkManager, must be restarted for this update to take effect.
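A minimal sketch of applying this update on Red Hat Enterprise Linux 6 follows; the services you restart depend on what is actually running on your system, NetworkManager is shown only as an example named in the advisory:
yum update dbus-glib             # install the fixed packages
service NetworkManager restart   # restart applications linked against dbus-glib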
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/dbus-glib
13.14. Software Selection
13.14. Software Selection To specify which packages will be installed, select Software Selection at the Installation Summary screen. The package groups are organized into Base Environments . These environments are pre-defined sets of packages with a specific purpose; for example, the Virtualization Host environment contains a set of software packages needed for running virtual machines on the system. Only one software environment can be selected at installation time. For each environment, there are additional packages available in the form of Add-ons . Add-ons are presented in the right part of the screen and the list of them is refreshed when a new environment is selected. You can select multiple add-ons for your installation environment. A horizontal line separates the list of add-ons into two areas: Add-ons listed above the horizontal line are specific to the environment you selected. If you select any add-ons in this part of the list and then select a different environment, your selection will be lost. Add-ons listed below the horizontal line are available for all environments. Selecting a different environment will not impact the selections made in this part of the list. Figure 13.15. Example of a Software Selection for a Server Installation The availability of base environments and add-ons depends on the variant of the installation ISO image which you are using as the installation source. For example, the server variant provides environments designed for servers, while the workstation variant has several choices for deployment as a developer workstation, and so on. The installation program does not show which packages are contained in the available environments. To see which packages are contained in a specific environment or add-on, see the repodata/*-comps- variant . architecture .xml file on the Red Hat Enterprise Linux Installation DVD which you are using as the installation source. This file contains a structure describing available environments (marked by the <environment> tag) and add-ons (the <group> tag). Important The pre-defined environments and add-ons allow you to customize your system, but in a manual installation, there is no way to select individual packages to install. If you are not sure what package should be installed, Red Hat recommends you to select the Minimal Install environment. Minimal install only installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. This will substantially reduce the chance of the system being affected by a vulnerability. After the system finishes installing and you log in for the first time, you can use the Yum package manager to install any additional software you need. For more details on Minimal install , see the Installing the Minimum Amount of Packages Required section of the Red Hat Enterprise Linux 7 Security Guide. Alternatively, automating the installation with a Kickstart file allows for a much higher degree of control over installed packages. You can specify environments, groups and individual packages in the %packages section of the Kickstart file. See Section 27.3.2, "Package Selection" for instructions on selecting packages to install in a Kickstart file, and Chapter 27, Kickstart Installations for general information about automating the installation with Kickstart. Once you have selected an environment and add-ons to be installed, click Done to return to the Installation Summary screen. 13.14.1. 
Core Network Services All Red Hat Enterprise Linux installations include the following network services: centralized logging through the rsyslog service email through SMTP (Simple Mail Transfer Protocol) network file sharing through NFS (Network File System) remote access through SSH (Secure SHell) resource advertising through mDNS (multicast DNS) Some automated processes on your Red Hat Enterprise Linux system use the email service to send reports and messages to the system administrator. By default, the email, logging, and printing services do not accept connections from other systems. You can configure your Red Hat Enterprise Linux system after installation to offer email, file sharing, logging, printing, and remote desktop access services. The SSH service is enabled by default. You can also use NFS to access files on other systems without enabling the NFS sharing service.
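Tying this back to the Kickstart note earlier in this section, a %packages section that selects an environment, an add-on group, and an individual package might look like the following sketch; the environment and group IDs shown are examples only, so check the comps file on your installation media for the IDs it actually provides:
%packages
@^minimal                  # environment (note the ^ prefix)
@network-tools             # add-on group
vim-enhanced               # individual package
%end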
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-package-selection-ppc
Planning Identity Management
Planning Identity Management Red Hat Enterprise Linux 9 Planning the infrastructure and service integration of an IdM environment Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/index
Chapter 39. Google Pubsub
Chapter 39. Google Pubsub Since Camel 2.19 Both producer and consumer are supported. The Google Pubsub component provides access to the Cloud Pub/Sub Infrastructure via the Google Cloud Java Client for Google Cloud Pub/Sub . 39.1. Dependencies When using google-pubsub with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-pubsub-starter</artifactId> </dependency> 39.2. URI Format The Google Pubsub Component uses the following URI format: Destination Name can be either a topic or a subscription name. 39.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 39.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 39.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 39.4. Component Options The Google Pubsub component supports 10 options, which are listed below. Name Description Default Type authenticate (common) Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true boolean endpoint (common) Endpoint to use with local Pub/Sub emulator. String serviceAccountKey (common) The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean synchronousPullRetryableCodes (consumer) Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean publisherCacheSize (producer) Maximum number of producers to cache. This could be increased if you have producers for lots of different topics. int publisherCacheTimeout (producer) How many milliseconds should each producer stay alive in the cache. int autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean publisherTerminationTimeout (advanced) How many milliseconds should a producer be allowed to terminate. int 39.5. Endpoint Options The Google Pubsub endpoint is configured using URI syntax: with the following path and query parameters: 39.5.1. Path Parameters (2 parameters) Name Description Default Type projectId (common) Required The Google Cloud PubSub Project Id. String destinationName (common) Required The Destination Name. For the consumer this will be the subscription name, while for the producer this will be the topic name. String 39.5.2. Query Parameters (15 parameters) Name Description Default Type authenticate (common) Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true boolean loggerId (common) Logger ID to use when a match to the parent route required. String serviceAccountKey (common) The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String ackMode (consumer) AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream process has to ack/nack explicitly. Enum values: AUTO NONE AUTO AckMode concurrentConsumers (consumer) The number of parallel streams consuming from the subscription. 1 Integer maxAckExtensionPeriod (consumer) Set the maximum period a message ack deadline will be extended. Value in seconds. 3600 int maxMessagesPerPoll (consumer) The max number of messages to receive from the server in a single API call. 1 Integer synchronousPull (consumer) Synchronously pull batches of messages. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean messageOrderingEnabled (producer (advanced)) Should message ordering be enabled. false boolean pubsubEndpoint (producer (advanced)) Pub/Sub endpoint to use. Required when using message ordering, and ensures that messages are received in order even when multiple publishers are used. String serializer (producer (advanced)) Autowired A custom GooglePubsubSerializer to use for serializing message payloads in the producer. GooglePubsubSerializer 39.6. Message Headers The Google Pubsub component supports 5 message header(s), which are listed below: Name Description Default Type CamelGooglePubsubMessageId (common) Constant: MESSAGE_ID The ID of the message, assigned by the server when the message is published. String CamelGooglePubsubMsgAckId (consumer) Constant: ACK_ID The ID used to acknowledge the received message. String CamelGooglePubsubPublishTime (consumer) Constant: PUBLISH_TIME The time at which the message was published. Timestamp CamelGooglePubsubAttributes (common) Constant: ATTRIBUTES The attributes of the message. Map CamelGooglePubsubOrderingKey (producer) Constant: ORDERING_KEY If non-empty, identifies related messages for which publish order should be respected. String 39.7. Producer Endpoints Producer endpoints can accept and deliver to PubSub individual and grouped exchanges alike. Grouped exchanges have the Exchange.GROUPED_EXCHANGE property set. Google PubSub expects the payload to be a byte[] array. Producer endpoints will send: String body as byte[] encoded as UTF-8 byte[] body as is Everything else will be serialised into a byte[] array A Map set as message header GooglePubsubConstants.ATTRIBUTES will be sent as PubSub attributes. Google PubSub supports ordered message delivery. To enable this, set the option messageOrderingEnabled to true, and set the pubsubEndpoint to a GCP region. When producing messages, set the message header GooglePubsubConstants.ORDERING_KEY . This will be set as the PubSub orderingKey for the message. More information in Ordering messages . Once the exchange has been delivered to PubSub, the PubSub Message ID will be assigned to the header GooglePubsubConstants.MESSAGE_ID . 39.8. Consumer Endpoints Google PubSub will redeliver the message if it has not been acknowledged within the time period set as a configuration option on the subscription. The component will acknowledge the message once exchange processing has been completed. If the route throws an exception, the exchange is marked as failed and the component will NACK the message - it will be redelivered immediately. To ack/nack the message, the component uses the Acknowledgement ID stored in the header GooglePubsubConstants.ACK_ID .
If the header is removed or tampered with, the ack will fail and the message will be redelivered after the ack deadline. 39.9. Message Body The consumer endpoint returns the content of the message as byte[] - exactly as the underlying system sends it. It is up to the route to convert/unmarshal the contents. 39.10. Authentication Configuration By default, this component acquires credentials using GoogleCredentials.getApplicationDefault() . This behavior can be disabled by setting the authenticate option to false , in which case requests to the Google API will be made without authentication details. This is only desirable when developing against an emulator. This behavior can be altered by supplying a path to a service account key file. 39.11. Rollback and Redelivery The rollback for Google PubSub relies on the idea of the Acknowledgement Deadline - the time period within which Google PubSub expects to receive the acknowledgement. If the acknowledgement has not been received, the message is redelivered. Google provides an API to extend the deadline for a message. More information in Google PubSub Documentation . So, a rollback is effectively a deadline extension API call with a zero value - i.e. the deadline is reached now and the message can be redelivered to the consumer. It is possible to delay the message redelivery by setting the acknowledgement deadline explicitly for the rollback by setting the message header GooglePubsubConstants.ACK_DEADLINE to the value in seconds. 39.12. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.google-pubsub.authenticate Use Credentials when interacting with PubSub service (no authentication is required when using emulator). true Boolean camel.component.google-pubsub.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.google-pubsub.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.google-pubsub.enabled Whether to enable auto configuration of the google-pubsub component. This is enabled by default. Boolean camel.component.google-pubsub.endpoint Endpoint to use with local Pub/Sub emulator. String camel.component.google-pubsub.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false Boolean camel.component.google-pubsub.publisher-cache-size Maximum number of producers to cache. This could be increased if you have producers for lots of different topics. Integer camel.component.google-pubsub.publisher-cache-timeout How many milliseconds should each producer stay alive in the cache. Integer camel.component.google-pubsub.publisher-termination-timeout How many milliseconds should a producer be allowed to terminate. Integer camel.component.google-pubsub.service-account-key The Service account key that can be used as credentials for the PubSub publisher/subscriber. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.google-pubsub.synchronous-pull-retryable-codes Comma-separated list of additional retryable error codes for synchronous pull. By default the PubSub client library retries ABORTED, UNAVAILABLE, UNKNOWN. String
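Note that the topic (for producer endpoints) and the subscription (for consumer endpoints) named in a google-pubsub URI must already exist in the Google Cloud project. As a minimal sketch outside the component reference above, and assuming the gcloud CLI is installed and authenticated against the target project, the resources could be created as follows; the project, topic, and subscription names are placeholders:

# Create an example topic and a subscription bound to it (names are placeholders)
gcloud pubsub topics create camel-demo-topic --project=my-gcp-project
gcloud pubsub subscriptions create camel-demo-sub --topic=camel-demo-topic --project=my-gcp-project

# Confirm that both resources exist before starting the Camel routes
gcloud pubsub topics list --project=my-gcp-project
gcloud pubsub subscriptions list --project=my-gcp-project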
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-pubsub-starter</artifactId> </dependency>", "google-pubsub://project-id:destinationName?[options]", "google-pubsub:projectId:destinationName" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-google-pubsub-component-starter
Chapter 41. Using and configuring firewalld A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules . These rules are used to sort the incoming traffic and either block it or allow through. firewalld is a firewall service daemon that provides a dynamic customizable host-based firewall with a D-Bus interface. Being dynamic, it enables creating, changing, and deleting the rules without the necessity to restart the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services, that simplify the traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level this network is assigned. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open. firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted, allow all traffic by default. Note that firewalld with nftables backend does not support passing custom nftables rules to firewalld , using the --direct option. 41.1. When to use firewalld, nftables, or iptables The following is a brief overview in which scenario you should use one of the following utilities: firewalld : Use the firewalld utility for simple firewall use cases. The utility is easy to use and covers the typical use cases for these scenarios. nftables : Use the nftables utility to set up complex and performance-critical firewalls, such as for a whole network. iptables : The iptables utility on Red Hat Enterprise Linux uses the nf_tables kernel API instead of the legacy back end. The nf_tables API provides backward compatibility so that scripts that use iptables commands still work on Red Hat Enterprise Linux. For new firewall scripts, Red Hat recommends to use nftables . Important To prevent the different firewall-related services ( firewalld , nftables , or iptables ) from influencing each other, run only one of them on a RHEL host, and disable the other services. 41.2. Firewall zones You can use the firewalld utility to separate networks into different zones according to the level of trust that you have with the interfaces and traffic within that network. A connection can only be part of one zone, but you can use that zone for many network connections. firewalld follows strict principles in regards to zones: Traffic ingresses only one zone. Traffic egresses only one zone. A zone defines a level of trust. Intrazone traffic (within the same zone) is allowed by default. Interzone traffic (from zone to zone) is denied by default. Principles 4 and 5 are a consequence of principle 3. Principle 4 is configurable through the zone option --remove-forward . Principle 5 is configurable by adding new policies. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with the following utilities: NetworkManager firewall-config utility firewall-cmd utility The RHEL web console The RHEL web console, firewall-config , and firewall-cmd can only edit the appropriate NetworkManager configuration files. 
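As a quick orientation, the zone assignments described above can be inspected from the command line. The following is a minimal sketch; the interface name enp1s0 is an example:

firewall-cmd --get-default-zone                  # zone used for interfaces without an explicit assignment
firewall-cmd --get-active-zones                  # zones that currently have interfaces or sources assigned
firewall-cmd --get-zone-of-interface=enp1s0      # zone of one specific interface (example name)
firewall-cmd --list-all --zone=public            # full rule set of a single zone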
If you change the zone of the interface using the web console, firewall-cmd , or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . The /usr/lib/firewalld/zones/ directory stores the predefined zones, and you can instantly apply them to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The default settings of the predefined zones are as follows: block Suitable for: Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Accepts: Only network connections initiated from within the system. dmz Suitable for: Computers in your DMZ that are publicly-accessible with limited access to your internal network. Accepts: Only selected incoming connections. drop Suitable for: Any incoming network packets are dropped without any notification. Accepts: Only outgoing network connections. external Suitable for: External networks with masquerading enabled, especially for routers. Situations when you do not trust the other computers on the network. Accepts: Only selected incoming connections. home Suitable for: Home environment where you mostly trust the other computers on the network. Accepts: Only selected incoming connections. internal Suitable for: Internal networks where you mostly trust the other computers on the network. Accepts: Only selected incoming connections. public Suitable for: Public areas where you do not trust other computers on the network. Accepts: Only selected incoming connections. trusted Accepts: All network connections. work Suitable for: Work environment where you mostly trust the other computers on the network. Accepts: Only selected incoming connections. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is the public zone. You can change the default zone. Note Make network zone names self-explanatory to help users understand them quickly. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. Additional resources firewalld.zone(5) man page on your system 41.3. Firewall policies The firewall policies specify the desired security state of your network. They outline rules and actions to take for different types of traffic. Typically, the policies contain rules for the following types of traffic: Incoming traffic Outgoing traffic Forward traffic Specific services and applications Network address translations (NAT) Firewall policies use the concept of firewall zones. Each zone is associated with a specific set of firewall rules that determine the traffic allowed. Policies apply firewall rules in a stateful, unidirectional manner. This means you only consider one direction of the traffic. The traffic return path is implicitly allowed due to stateful filtering of firewalld . Policies are associated with an ingress zone and an egress zone. The ingress zone is where the traffic originated (received). The egress zone is where the traffic leaves (sent). The firewall rules defined in a policy can reference the firewall zones to apply consistent configurations across multiple network interfaces. 41.4. Firewall rules You can use the firewall rules to implement specific configurations for allowing or blocking network traffic. 
As a result, you can control the flow of network traffic to protect your system from security threats. Firewall rules typically define certain criteria based on various attributes. The attributes can be as: Source IP addresses Destination IP addresses Transfer Protocols (TCP, UDP, ... ) Ports Network interfaces The firewalld utility organizes the firewall rules into zones (such as public , internal , and others) and policies. Each zone has its own set of rules that determine the level of traffic freedom for network interfaces associated with a particular zone. 41.5. Zone configuration files A firewalld zone configuration file contains the information for a zone. These are the zone description, services, ports, protocols, icmp-blocks, masquerade, forward-ports and rich language rules in an XML file format. The file name has to be zone-name .xml where the length of zone-name is currently limited to 17 chars. The zone configuration files are located in the /usr/lib/firewalld/zones/ and /etc/firewalld/zones/ directories. The following example shows a configuration that allows one service ( SSH ) and one port range, for both the TCP and UDP protocols: <?xml version="1.0" encoding="utf-8"?> <zone> <short>My Zone</short> <description>Here you can describe the characteristic features of the zone.</description> <service name="ssh"/> <port protocol="udp" port="1025-65535"/> <port protocol="tcp" port="1025-65535"/> </zone> Additional resources firewalld.zone manual page 41.6. Predefined firewalld services The firewalld service is a predefined set of firewall rules that define access to a specific application or network service. Each service represents a combination of the following elements: Local port Network protocol Associated firewall rules Source ports and destinations Firewall helper modules that load automatically if a service is enabled A service simplifies packet filtering and saves you time because it achieves several tasks at once. For example, firewalld can perform the following tasks at once: Open a port Define network protocol Enable packet forwarding Service configuration options and generic file information are described in the firewalld.service(5) man page on your system. The services are specified by means of individual XML configuration files, which are named in the following format: service-name .xml . Protocol names are preferred over service or application names in firewalld . You can configure firewalld in the following ways: Use utilities: firewall-config - graphical utility firewall-cmd - command-line utility firewall-offline-cmd - command-line utility Edit the XML files in the /etc/firewalld/services/ directory. If you do not add or change the service, no corresponding XML file exists in /etc/firewalld/services/ . You can use the files in /usr/lib/firewalld/services/ as templates. Additional resources firewalld.service(5) man page on your system 41.7. Working with firewalld zones Zones represent a concept to manage incoming traffic more transparently. The zones are connected to networking interfaces or assigned a range of source addresses. You manage firewall rules for each zone independently, which enables you to define complex firewall settings and apply them to the traffic. 41.7.1. Customizing firewall settings for a specific zone to enhance security You can strengthen your network security by modifying the firewall settings and associating a specific network interface or connection with a particular firewall zone. 
By defining granular rules and restrictions for a zone, you can control inbound and outbound traffic based on your intended security levels. For example, you can achieve the following benefits: Protection of sensitive data Prevention of unauthorized access Mitigation of potential network threats Prerequisites The firewalld service is running. Procedure List the available firewall zones: The firewall-cmd --get-zones command displays all zones that are available on the system, but it does not show any details for particular zones. To see more detailed information for all zones, use the firewall-cmd --list-all-zones command. Choose the zone you want to use for this configuration. Modify firewall settings for the chosen zone. For example, to allow the SSH service and remove the ftp service: Assign a network interface to the firewall zone: List the available network interfaces: Activity of a zone is determined by the presence of network interfaces or source address ranges that match its configuration. The default zone is active for unclassified traffic but is not always active if no traffic matches its rules. Assign a network interface to the chosen zone: Assigning a network interface to a zone is more suitable for applying consistent firewall settings to all traffic on a particular interface (physical or virtual). The firewall-cmd command, when used with the --permanent option, often involves updating NetworkManager connection profiles to make changes to the firewall configuration permanent. This integration between firewalld and NetworkManager ensures consistent network and firewall settings. Verification Display the updated settings for your chosen zone: The command output displays all zone settings including the assigned services, network interface, and network connections (sources). 41.7.2. Changing the default zone System administrators assign a zone to a networking interface in its configuration files. If an interface is not assigned to a specific zone, it is assigned to the default zone. After each restart of the firewalld service, firewalld loads the settings for the default zone and makes it active. Note that settings for all other zones are preserved and ready to be used. Typically, zones are assigned to interfaces by NetworkManager according to the connection.zone setting in NetworkManager connection profiles. Also, after a reboot NetworkManager manages assignments for "activating" those zones. Prerequisites The firewalld service is running. Procedure To set up the default zone: Display the current default zone: Set the new default zone: Note Following this procedure, the setting is a permanent setting, even without the --permanent option. 41.7.3. Assigning a network interface to a zone It is possible to define different sets of rules for different zones and then change the settings quickly by changing the zone for the interface that is being used. With multiple interfaces, a specific zone can be set for each of them to distinguish traffic that is coming through them. Procedure To assign the zone to a specific interface: List the active zones and the interfaces assigned to them: Assign the interface to a different zone: 41.7.4. Assigning a zone to a connection using nmcli You can add a firewalld zone to a NetworkManager connection using the nmcli utility. Procedure Assign the zone to the NetworkManager connection profile: Activate the connection: 41.7.5. 
Manually assigning a zone to a network connection in a connection profile file If you cannot use the nmcli utility to modify a connection profile, you can manually edit the corresponding file of the profile to assign a firewalld zone. Note Modifying the connection profile with the nmcli utility to assign a firewalld zone is more efficient. For details, see Assigning a network interface to a zone . Procedure Determine the path to the connection profile and its format: NetworkManager uses separate directories and file names for the different connection profile formats: Profiles in /etc/NetworkManager/system-connections/ <connection_name> .nmconnection files use the keyfile format. Profiles in /etc/sysconfig/network-scripts/ifcfg- <interface_name> files use the ifcfg format. Depending on the format, update the corresponding file: If the file uses the keyfile format, append zone= <name> to the [connection] section of the /etc/NetworkManager/system-connections/ <connection_name> .nmconnection file: If the file uses the ifcfg format, append ZONE= <name> to the /etc/sysconfig/network-scripts/ifcfg- <interface_name> file: Reload the connection profiles: Reactivate the connection profiles Verification Display the zone of the interface, for example: 41.7.6. Manually assigning a zone to a network connection in an ifcfg file When the connection is managed by NetworkManager , it must be aware of a zone that it uses. For every network connection profile, a zone can be specified, which provides the flexibility of various firewall settings according to the location of the computer with portable devices. Thus, zones and settings can be specified for different locations, such as company or home. Procedure To set a zone for a connection, edit the /etc/sysconfig/network-scripts/ifcfg- connection_name file and add a line that assigns a zone to this connection: 41.7.7. Creating a new zone To use custom zones, create a new zone and use it just like a predefined zone. New zones require the --permanent option, otherwise the command does not work. Prerequisites The firewalld service is running. Procedure Create a new zone: Make the new zone usable: The command applies recent changes to the firewall configuration without interrupting network services that are already running. Verification Check if the new zone is added to your permanent settings: 41.7.8. Enabling zones by using the web console You can apply predefined and existing firewall zones on a particular interface or a range of IP addresses through the RHEL web console. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with the administrator privileges. In the Firewall section, click Add new zone . In the Add zone dialog box, select a zone from the Trust level options. The web console displays all zones predefined in the firewalld service. In the Interfaces part, select an interface or interfaces on which the selected zone is applied. In the Allowed Addresses part, you can select whether the zone is applied on: the whole subnet or a range of IP addresses in the following format: 192.168.1.0 192.168.1.0/24 192.168.1.0/24, 192.168.1.0 Click on the Add zone button. 
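Before the web console verification below, here is a hedged command-line sketch of the zone creation and assignment steps described in the preceding sections; the zone name example-zone, the connection profile name, and the interface name enp1s0 are placeholders:

firewall-cmd --permanent --new-zone=example-zone              # create a new zone (requires --permanent)
firewall-cmd --reload                                         # make the new zone usable
firewall-cmd --get-zones --permanent                          # check that the zone is in the permanent settings
nmcli connection modify enp1s0 connection.zone example-zone   # assign the zone through NetworkManager
nmcli connection up enp1s0                                    # reactivate the connection profile
firewall-cmd --get-zone-of-interface=enp1s0                   # verify the assignment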
Verification Check the configuration in the Firewall section: 41.7.9. Disabling zones by using the web console You can disable a firewall zone in your firewall configuration by using the web console. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with the administrator privileges. Click on the Options icon at the zone you want to remove. Click Delete . The zone is now disabled and the interface does not include opened services and ports which were configured in the zone. 41.7.10. Using zone targets to set default behavior for incoming traffic For every zone, you can set a default behavior that handles incoming traffic that is not further specified. Such behavior is defined by setting the target of the zone. There are four options: ACCEPT : Accepts all incoming packets except those disallowed by specific rules. REJECT : Rejects all incoming packets except those allowed by specific rules. When firewalld rejects packets, the source machine is informed about the rejection. DROP : Drops all incoming packets except those allowed by specific rules. When firewalld drops packets, the source machine is not informed about the packet drop. default : Similar behavior as for REJECT , but with special meanings in certain scenarios. Prerequisites The firewalld service is running. Procedure To set a target for a zone: List the information for the specific zone to see the default target: Set a new target in the zone: Additional resources firewall-cmd(1) man page on your system 41.8. Controlling network traffic using firewalld The firewalld package installs a large number of predefined service files and you can add more or customize them. You can then use these service definitions to open or close ports for services without knowing the protocol and port numbers they use. 41.8.1. Controlling traffic with predefined services using the CLI The most straightforward method to control traffic is to add a predefined service to firewalld . This opens all necessary ports and modifies other settings according to the service definition file . Prerequisites The firewalld service is running. Procedure Check that the service in firewalld is not already allowed: The command lists the services that are enabled in the default zone. List all predefined services in firewalld : The command displays a list of available services for the default zone. Add the service to the list of services that firewalld allows: The command adds the specified service to the default zone. Make the new settings persistent: The command applies these runtime changes to the permanent configuration of the firewall. By default, it applies these changes to the configuration of the default zone. Verification List all permanent firewall rules: The command displays complete configuration with the permanent firewall rules of the default firewall zone ( public ). Check the validity of the permanent configuration of the firewalld service. If the permanent configuration is invalid, the command returns an error with further details: You can also manually inspect the permanent configuration files to verify the settings. 
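For reference, the CLI procedure in Section 41.8.1 maps to commands along the following lines; the https service is only an example:

firewall-cmd --list-services              # services currently allowed in the default zone
firewall-cmd --get-services               # all predefined service definitions
firewall-cmd --add-service=https          # runtime change: allow an example service
firewall-cmd --runtime-to-permanent       # make the runtime changes persistent
firewall-cmd --list-all --permanent       # review the permanent rules of the default zone
firewall-cmd --check-config               # validate the permanent configuration
less /etc/firewalld/zones/public.xml      # manual inspection of a permanent zone file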
The main configuration file is /etc/firewalld/firewalld.conf . The zone-specific configuration files are in the /etc/firewalld/zones/ directory and the policies are in the /etc/firewalld/policies/ directory. 41.8.2. Controlling traffic with predefined services using the GUI You can control the network traffic with predefined services using a graphical user interface. The Firewall Configuration application provides an accessible and user-friendly alternative to the command-line utilities. Prerequisites You installed the firewall-config package. The firewalld service is running. Procedure To enable or disable a predefined or custom service: Start the firewall-config utility and select the network zone whose services are to be configured. Select the Zones tab and then the Services tab below. Select the checkbox for each type of service you want to trust or clear the checkbox to block a service in the selected zone. To edit a service: Start the firewall-config utility. Select Permanent from the menu labeled Configuration . Additional icons and menu buttons appear at the bottom of the Services window. Select the service you want to configure. The Ports , Protocols , and Source Port tabs enable adding, changing, and removing of ports, protocols, and source ports for the selected service. The Modules tab is for configuring Netfilter helper modules. The Destination tab enables limiting traffic to a particular destination address and Internet Protocol ( IPv4 or IPv6 ). Note It is not possible to alter service settings in the Runtime mode. Verification Press the Super key to enter the Activities overview. Select the Firewall Configuration utility. You can also start the graphical firewall configuration utility using the command line, by entering the firewall-config command. View the list of configurations of your firewall: The Firewall Configuration window opens. Note that this command can be run as a normal user, but you are prompted for an administrator password occasionally. 41.8.3. Enabling services on the firewall by using the web console By default, services are added to the default firewall zone. If you use multiple firewall zones on multiple network interfaces, you must select a zone first and then add the service with its port. The RHEL 8 web console displays predefined firewalld services and you can add them to active firewall zones. Important The RHEL 8 web console configures the firewalld service. The web console does not allow generic firewalld rules which are not listed in the web console. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with administrator privileges. In the Firewall section, select a zone for which you want to add the service and click Add Services . In the Add Services dialog box, find the service you want to enable on the firewall. Enable services according to your scenario: Click Add Services . At this point, the RHEL 8 web console displays the service in the zone's list of Services . 41.8.4. Configuring custom ports by using the web console You can configure custom ports for services through the RHEL web console. Prerequisites You have installed the RHEL 8 web console.
You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The firewalld service is running. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Networking . Click on the Edit rules and zones button. If you do not see the Edit rules and zones button, log in to the web console with administrative privileges. In the Firewall section, select a zone for which you want to configure a custom port and click Add Services . In the Add services dialog box, click on the Custom Ports radio button. In the TCP and UDP fields, add ports according to the examples. You can add ports in the following formats: Port numbers such as 22 Range of port numbers such as 5900-5910 Aliases such as nfs, rsync Note You can add multiple values into each field. Values must be separated by a comma and without a space, for example: 8080,8081,http After adding the port number in the TCP field, the UDP field, or both, verify the service name in the Name field. The Name field displays the name of the service for which this port is reserved. You can rewrite the name if you are sure that this port is free to use and no server needs to communicate on this port. In the Name field, add a name for the service including the defined ports. Click on the Add Ports button. To verify the settings, go to the Firewall page and find the service in the zone's list of Services . 41.8.5. Configuring firewalld to allow hosting a secure web server Ports are logical services that enable an operating system to receive and distinguish network traffic and forward it to system services. The system services are represented by a daemon that listens on the port and waits for any traffic coming to this port. Normally, system services listen on standard ports that are reserved for them. The httpd daemon, for example, listens on port 80. However, system administrators can directly specify the port number instead of the service name. You can use the firewalld service to configure access to a secure web server for hosting your data. Prerequisites The firewalld service is running. Procedure Check the currently active firewall zone: Add the HTTPS service to the appropriate zone: Reload the firewall configuration: Verification Check if the port is open in firewalld : If you opened the port by specifying the port number, enter: If you opened the port by specifying a service definition, enter: 41.8.6. Closing unused or unnecessary ports to enhance network security When an open port is no longer needed, you can use the firewalld utility to close it. Important Close all unnecessary ports to reduce the potential attack surface and minimize the risk of unauthorized access or exploitation of vulnerabilities. Procedure List all allowed ports: By default, this command lists the ports that are enabled in the default zone. Note This command will only give you a list of ports that are opened as ports. You will not be able to see any open ports that are opened as a service. For that case, consider using the --list-all option instead of --list-ports . Remove the port from the list of allowed ports to close it for the incoming traffic: This command removes a port from a zone. If you do not specify a zone, it will remove the port from the default zone. Make the new settings persistent: Without specifying a zone, this command applies runtime changes to the permanent configuration of the default zone.
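A consolidated sketch of the commands behind the two procedures above; the public zone, the https service, and port 8080 are examples:

# 41.8.5: allow a secure web server
firewall-cmd --get-active-zones
firewall-cmd --zone=public --add-service=https --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-services       # or --list-ports if you opened a raw port number

# 41.8.6: close a port that is no longer needed
firewall-cmd --list-ports
firewall-cmd --remove-port=8080/tcp
firewall-cmd --runtime-to-permanent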
Verification List the active zones and choose the zone you want to inspect: List the currently open ports in the selected zone to check if the unused or unnecessary ports are closed: 41.8.7. Controlling traffic through the CLI You can use the firewall-cmd command to: disable networking traffic enable networking traffic As a result, you can for example enhance your system defenses, ensure data privacy or optimize network resources. Important Enabling panic mode stops all networking traffic. For this reason, it should be used only when you have the physical access to the machine or if you are logged in using a serial console. Procedure To immediately disable networking traffic, switch panic mode on: Switching off panic mode reverts the firewall to its permanent settings. To switch panic mode off, enter: Verification To see whether panic mode is switched on or off, use: 41.8.8. Controlling traffic with protocols using GUI To permit traffic through the firewall using a certain protocol, you can use the GUI. Prerequisites You installed the firewall-config package Procedure Start the firewall-config tool and select the network zone whose settings you want to change. Select the Protocols tab and click the Add button on the right-hand side. The Protocol window opens. Either select a protocol from the list or select the Other Protocol check box and enter the protocol in the field. 41.9. Using zones to manage incoming traffic depending on a source You can use zones to manage incoming traffic based on its source. Incoming traffic in this context is any data that is destined for your system, or passes through the host running firewalld . The source typically refers to the IP address or network range from which the traffic originates. As a result, you can sort incoming traffic and assign it to different zones to allow or disallow services that can be reached by that traffic. Matching by source address takes precedence over matching by interface name. When you add a source to a zone, the firewall will prioritize the source-based rules for incoming traffic over interface-based rules. This means that if incoming traffic matches a source address specified for a particular zone, the zone associated with that source address will determine how the traffic is handled, regardless of the interface through which it arrives. On the other hand, interface-based rules are generally a fallback for traffic that does not match specific source-based rules. These rules apply to traffic, for which the source is not explicitly associated with a zone. This allows you to define a default behavior for traffic that does not have a specific source-defined zone. 41.9.1. Adding a source To route incoming traffic into a specific zone, add the source to that zone. The source can be an IP address or an IP mask in the classless inter-domain routing (CIDR) notation. Note In case you add multiple zones with an overlapping network range, they are ordered alphanumerically by zone name and only the first one is considered. To set the source in the current zone: To set the source IP address for a specific zone: The following procedure allows all incoming traffic from 192.168.2.15 in the trusted zone: Procedure List all available zones: Add the source IP to the trusted zone in the permanent mode: Make the new settings persistent: 41.9.2. Removing a source When you remove a source from a zone, the traffic which originates from the source is no longer directed through the rules specified for that source. 
Instead, the traffic falls back to the rules and settings of the zone associated with the interface from which it originates, or goes to the default zone. Procedure List allowed sources for the required zone: Remove the source from the zone permanently: Make the new settings persistent: 41.9.3. Removing a source port By removing a source port you disable sorting the traffic based on a port of origin. Procedure To remove a source port: 41.9.4. Using zones and sources to allow a service for only a specific domain To allow traffic from a specific network to use a service on a machine, use zones and source. The following procedure allows only HTTP traffic from the 192.0.2.0/24 network while any other traffic is blocked. Warning When you configure this scenario, use a zone that has the default target. Using a zone that has the target set to ACCEPT is a security risk, because for traffic from 192.0.2.0/24 , all network connections would be accepted. Procedure List all available zones: Add the IP range to the internal zone to route the traffic originating from the source through the zone: Add the http service to the internal zone: Make the new settings persistent: Verification Check that the internal zone is active and that the service is allowed in it: Additional resources firewalld.zones(5) man page on your system 41.10. Filtering forwarded traffic between zones firewalld enables you to control the flow of network data between different firewalld zones. By defining rules and policies, you can manage how traffic is allowed or blocked when it moves between these zones. The policy objects feature provides forward and output filtering in firewalld . You can use firewalld to filter traffic between different zones to allow access to locally hosted VMs to connect the host. 41.10.1. The relationship between policy objects and zones Policy objects allow the user to attach firewalld's primitives such as services, ports, and rich rules to the policy. You can apply the policy objects to traffic that passes between zones in a stateful and unidirectional manner. HOST and ANY are the symbolic zones used in the ingress and egress zone lists. The HOST symbolic zone allows policies for the traffic originating from or has a destination to the host running firewalld. The ANY symbolic zone applies policy to all the current and future zones. ANY symbolic zone acts as a wildcard for all zones. 41.10.2. Using priorities to sort policies Multiple policies can apply to the same set of traffic, therefore, priorities should be used to create an order of precedence for the policies that may be applied. To set a priority to sort the policies: In the above example -500 is a lower priority value but has higher precedence. Thus, -500 will execute before -100. Lower numerical priority values have higher precedence and are applied first. 41.10.3. Using policy objects to filter traffic between locally hosted containers and a network physically connected to the host The policy objects feature allows users to filter traffic between Podman and firewalld zones. Note Red Hat recommends blocking all traffic by default and opening the selective services needed for the Podman utility. Procedure Create a new firewall policy: Block all traffic from Podman to other zones and allow only necessary services on Podman: Create a new Podman zone: Define the ingress zone for the policy: Define the egress zone for all other zones: Setting the egress zone to ANY means that you filter from Podman to other zones. 
If you want to filter to the host, then set the egress zone to HOST. Restart the firewalld service: Verification Verify the Podman firewall policy to other zones: 41.10.4. Setting the default target of policy objects You can specify --set-target options for policies. The following targets are available: ACCEPT - accepts the packet DROP - drops the unwanted packets REJECT - rejects unwanted packets with an ICMP reply CONTINUE (default) - packets will be subject to rules in following policies and zones. Verification Verify information about the policy 41.10.5. Using DNAT to forward HTTPS traffic to a different host If your web server runs in a DMZ with private IP addresses, you can configure destination network address translation (DNAT) to enable clients on the internet to connect to this web server. In this case, the host name of the web server resolves to the public IP address of the router. When a client establishes a connection to a defined port on the router, the router forwards the packets to the internal web server. Prerequisites The DNS server resolves the host name of the web server to the router's IP address. You know the following settings: The private IP address and port number that you want to forward The IP protocol to be used The destination IP address and port of the web server where you want to redirect the packets Procedure Create a firewall policy: The policies, as opposed to zones, allow packet filtering for input, output, and forwarded traffic. This is important, because forwarding traffic to endpoints on locally run web servers, containers, or virtual machines requires such capability. Configure symbolic zones for the ingress and egress traffic to also enable the router itself to connect to its local IP address and forward this traffic: The --add-ingress-zone=HOST option refers to packets generated locally and transmitted out of the local host. The --add-egress-zone=ANY option refers to traffic moving to any zone. Add a rich rule that forwards traffic to the web server: The rich rule forwards TCP traffic from port 443 on the IP address of the router (192.0.2.1) to port 443 of the IP address of the web server (192.51.100.20). Reload the firewall configuration files: Activate routing of 127.0.0.0/8 in the kernel: For persistent changes, run: The command persistently configures the route_localnet kernel parameter and ensures that the setting is preserved after the system reboots. For applying the settings immediately without a system reboot, run: The sysctl command is useful for applying on-the-fly changes, however the configuration will not persist across system reboots. Verification Connect to the IP address of the router and to the port that you have forwarded to the web server: Optional: Verify that the net.ipv4.conf.all.route_localnet kernel parameter is active: Verify that <example_policy> is active and contains the settings you need, especially the source IP address and port, protocol to be used, and the destination IP address and port: Additional resources firewall-cmd(1) , firewalld.policies(5) , firewalld.richlanguage(5) , sysctl(8) , and sysctl.conf(5) man pages on your system Using configuration files in /etc/sysctl.d/ to adjust kernel parameters 41.11. Configuring NAT using firewalld With firewalld , you can configure the following network address translation (NAT) types: Masquerading Destination NAT (DNAT) Redirect 41.11.1. 
Network address translation types These are the different network address translation (NAT) types: Masquerading Use one of these NAT types to change the source IP address of packets. For example, Internet Service Providers (ISPs) do not route private IP ranges, such as 10.0.0.0/8 . If you use private IP ranges in your network and users should be able to reach servers on the internet, map the source IP address of packets from these ranges to a public IP address. Masquerading automatically uses the IP address of the outgoing interface. Therefore, use masquerading if the outgoing interface uses a dynamic IP address. Destination NAT (DNAT) Use this NAT type to rewrite the destination address and port of incoming packets. For example, if your web server uses an IP address from a private IP range and is, therefore, not directly accessible from the internet, you can set a DNAT rule on the router to redirect incoming traffic to this server. Redirect This type is a special case of DNAT that redirects packets to a different port on the local machine. For example, if a service runs on a different port than its standard port, you can redirect incoming traffic from the standard port to this specific port. 41.11.2. Configuring IP address masquerading You can enable IP masquerading on your system. IP masquerading hides individual machines behind a gateway when accessing the internet. Procedure To check if IP masquerading is enabled (for example, for the external zone), enter the following command as root : The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise. If zone is omitted, the default zone will be used. To enable IP masquerading, enter the following command as root : To make this setting persistent, pass the --permanent option to the command. To disable IP masquerading, enter the following command as root : To make this setting permanent, pass the --permanent option to the command. 41.11.3. Using DNAT to forward incoming HTTP traffic You can use destination network address translation (DNAT) to direct incoming traffic from one destination address and port to another. Typically, this is useful for redirecting incoming requests from an external network interface to specific internal servers or services. Prerequisites The firewalld service is running. Procedure Create the /etc/sysctl.d/90-enable-IP-forwarding.conf file with the following content: This setting enables IP forwarding in the kernel. It makes the internal RHEL server act as a router and forward packets from network to network. Load the setting from the /etc/sysctl.d/90-enable-IP-forwarding.conf file: Forward incoming HTTP traffic: The command defines a DNAT rule with the following settings: --zone=public - The firewall zone for which you configure the DNAT rule. You can adjust this to whatever zone you need. --add-forward-port - The option that indicates you are adding a port-forwarding rule. port=80 - The external destination port. proto=tcp - The protocol indicating that you forward TCP traffic. toaddr=198.51.100.10 - The destination IP address. toport=8080 - The destination port of the internal server. --permanent - The option that makes the DNAT rule persistent across reboots. Reload the firewall configuration to apply the changes: Verification Verify the DNAT rule for the firewall zone that you used: Alternatively, view the corresponding XML configuration file: Additional resources Configuring kernel parameters at runtime firewall-cmd(1) manual page 41.11.4. 
Redirecting traffic from a non-standard port to make the web service accessible on a standard port You can use the redirect mechanism to make the web service that internally runs on a non-standard port accessible without requiring users to specify the port in the URL. As a result, the URLs are simpler and provide better browsing experience, while a non-standard port is still used internally or for specific requirements. Prerequisites The firewalld service is running. Procedure Create the /etc/sysctl.d/90-enable-IP-forwarding.conf file with the following content: This setting enables IP forwarding in the kernel. Load the setting from the /etc/sysctl.d/90-enable-IP-forwarding.conf file: Create the NAT redirect rule: The command defines the NAT redirect rule with the following settings: --zone=public - The firewall zone, for which you configure the rule. You can adjust this to whatever zone you need. --add-forward-port=port= <non_standard_port> - The option that indicates you are adding a port-forwarding (redirecting) rule with source port on which you initially receive the incoming traffic. proto=tcp - The protocol indicating that you redirect TCP traffic. toport= <standard_port> - The destination port, to which the incoming traffic should be redirected after being received on the source port. --permanent - The option that makes the rule persist across reboots. Reload the firewall configuration to apply the changes: Verification Verify the redirect rule for the firewall zone that you used: Alternatively, view the corresponding XML configuration file: Additional resources Configuring kernel parameters at runtime firewall-cmd(1) manual page 41.12. Managing ICMP requests The Internet Control Message Protocol ( ICMP ) is a supporting protocol that is used by various network devices for testing, troubleshooting, and diagnostics. ICMP differs from transport protocols such as TCP and UDP because it is not used to exchange data between systems. You can use the ICMP messages, especially echo-request and echo-reply , to reveal information about a network and misuse such information for various kinds of fraudulent activities. Therefore, firewalld enables controlling the ICMP requests to protect your network information. 41.12.1. Configuring ICMP filtering You can use ICMP filtering to define which ICMP types and codes you want the firewall to permit or deny from reaching your system. ICMP types and codes are specific categories and subcategories of ICMP messages. ICMP filtering helps, for example, in the following areas: Security enhancement - Block potentially harmful ICMP types and codes to reduce your attack surface. Network performance - Permit only necessary ICMP types to optimize network performance and prevent potential network congestion caused by excessive ICMP traffic. Troubleshooting control - Maintain essential ICMP functionality for network troubleshooting and block ICMP types that represent potential security risk. Prerequisites The firewalld service is running. Procedure List available ICMP types and codes: From this predefined list, select which ICMP types and codes to allow or block. Filter specific ICMP types by: Allowing ICMP types: The command removes any existing blocking rules for the echo requests ICMP type. Blocking ICMP types: The command ensures that the redirect messages ICMP type is blocked by the firewall. 
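Taken together, the allow and block steps above might look like the following on the command line; the echo-request and redirect types come from the predefined list and are used as examples:

firewall-cmd --get-icmptypes                                # list the available ICMP types
firewall-cmd --permanent --remove-icmp-block=echo-request   # allow echo requests
firewall-cmd --permanent --add-icmp-block=redirect          # block redirect messages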
Reload the firewall configuration to apply the changes: Verification Verify your filtering rules are in effect: The command output displays the ICMP types and codes that you allowed or blocked. Additional resources firewall-cmd(1) manual page 41.13. Setting and controlling IP sets using firewalld IP sets are a RHEL feature for grouping of IP addresses and networks into sets to achieve more flexible and efficient firewall rule management. The IP sets are valuable in scenarios when you need to for example: Handle large lists of IP addresses Implement dynamic updates to those large lists of IP addresses Create custom IP-based policies to enhance network security and control Warning Red Hat recommends using the firewall-cmd command to create and manage IP sets. 41.13.1. Configuring dynamic updates for allowlisting with IP sets You can make near real-time updates to flexibly allow specific IP addresses or ranges in the IP sets even in unpredictable conditions. These updates can be triggered by various events, such as detection of security threats or changes in the network behavior. Typically, such a solution leverages automation to reduce manual effort and improve security by responding quickly to the situation. Prerequisites The firewalld service is running. Procedure Create an IP set with a meaningful name: The new IP set called allowlist contains IP addresses that you want your firewall to allow. Add a dynamic update to the IP set: This configuration updates the allowlist IP set with a newly added IP address that is allowed to pass network traffic by your firewall. Create a firewall rule that references the previously created IP set: Without this rule, the IP set would not have any impact on network traffic. The default firewall policy would prevail. Reload the firewall configuration to apply the changes: Verification List all IP sets: List the active rules: The sources section of the command-line output provides insights to what origins of traffic (hostnames, interfaces, IP sets, subnets, and others) are permitted or denied access to a particular firewall zone. In this case, the IP addresses contained in the allowlist IP set are allowed to pass traffic through the firewall for the public zone. Explore the contents of your IP set: steps Use a script or a security utility to fetch your threat intelligence feeds and update allowlist accordingly in an automated fashion. Additional resources firewall-cmd(1) manual page 41.14. Prioritizing rich rules By default, rich rules are organized based on their rule action. For example, deny rules have precedence over allow rules. The priority parameter in rich rules provides administrators fine-grained control over rich rules and their execution order. When using the priority parameter, rules are sorted first by their priority values in ascending order. When more rules have the same priority , their order is determined by the rule action, and if the action is also the same, the order may be undefined. 41.14.1. How the priority parameter organizes rules into different chains You can set the priority parameter in a rich rule to any number between -32768 and 32767 , and lower numerical values have higher precedence. The firewalld service organizes rules based on their priority value into different chains: Priority lower than 0: the rule is redirected into a chain with the _pre suffix. Priority higher than 0: the rule is redirected into a chain with the _post suffix. 
41.14. Prioritizing rich rules By default, rich rules are organized based on their rule action. For example, deny rules have precedence over allow rules. The priority parameter in rich rules provides administrators with fine-grained control over rich rules and their execution order. When using the priority parameter, rules are sorted first by their priority values in ascending order. When multiple rules have the same priority, their order is determined by the rule action, and if the action is also the same, the order may be undefined. 41.14.1. How the priority parameter organizes rules into different chains You can set the priority parameter in a rich rule to any number between -32768 and 32767 , and lower numerical values have higher precedence. The firewalld service organizes rules based on their priority value into different chains: Priority lower than 0: the rule is redirected into a chain with the _pre suffix. Priority higher than 0: the rule is redirected into a chain with the _post suffix. Priority equals 0: based on the action, the rule is redirected into a chain with the _log , _deny , or _allow suffix. Inside these sub-chains, firewalld sorts the rules based on their priority value. 41.14.2. Setting the priority of a rich rule The following is an example of how to create a rich rule that uses the priority parameter to log all traffic that is not allowed or denied by other rules. You can use this rule to flag unexpected traffic. Procedure Add a rich rule with a very low precedence to log all traffic that has not been matched by other rules: The command additionally limits the number of log entries to 5 per minute. Verification Display the nftables rule that the command in the previous step created: 41.15. Configuring firewall lockdown Local applications or services are able to change the firewall configuration if they are running as root (for example, libvirt ). With this feature, the administrator can lock the firewall configuration so that either no applications or only applications that are added to the lockdown allow list are able to request firewall changes. The lockdown settings default to disabled. If enabled, the user can be sure that there are no unwanted configuration changes made to the firewall by local applications or services. 41.15.1. Configuring lockdown using CLI You can enable or disable the lockdown feature using the command line. Procedure To query whether lockdown is enabled: Manage lockdown configuration by either: Enabling lockdown: Disabling lockdown: 41.15.2. Overview of lockdown allowlist configuration files The default allowlist configuration file contains the NetworkManager context and the default context of libvirt . The user ID 0 is also on the list. The allowlist configuration files are stored in the /etc/firewalld/ directory. <?xml version="1.0" encoding="utf-8"?> <whitelist> <command name="/usr/bin/python3 -s /usr/bin/firewall-config"/> <selinux context="system_u:system_r:NetworkManager_t:s0"/> <selinux context="system_u:system_r:virtd_t:s0-s0:c0.c1023"/> <user id="0"/> </whitelist> The following is an example allowlist configuration file enabling all commands for the firewall-cmd utility, for a user called user whose user ID is 815 : <?xml version="1.0" encoding="utf-8"?> <whitelist> <command name="/usr/libexec/platform-python -s /bin/firewall-cmd*"/> <selinux context="system_u:system_r:NetworkManager_t:s0"/> <user id="815"/> <user name="user"/> </whitelist> This example shows both user id and user name , but only one option is required. Python is the interpreter and is prepended to the command line. In Red Hat Enterprise Linux, all utilities are placed in the /usr/bin/ directory and the /bin/ directory is symlinked to the /usr/bin/ directory. In other words, although the path for firewall-cmd when entered as root might resolve to /bin/firewall-cmd , /usr/bin/firewall-cmd can now be used. All new scripts should use the new location. But be aware that if scripts that run as root are written to use the /bin/firewall-cmd path, then that command path must be added to the allowlist in addition to the /usr/bin/firewall-cmd path traditionally used only for non-root users. The * at the end of the name attribute of a command means that all commands that start with this string match. If the * is not there, then the absolute command, including arguments, must match.
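Instead of editing the allowlist XML files directly, you can also manage the same kind of entries with the lockdown allowlist options of the firewall-cmd utility. The following is a minimal sketch, assuming the user ID 815 from the example above and adding both command paths discussed in the note; it illustrates the standard firewall-cmd lockdown-whitelist options and is not part of the documented procedure:
# Check whether lockdown is active
firewall-cmd --query-lockdown
# Permit the user with ID 815 to request firewall changes
firewall-cmd --permanent --add-lockdown-whitelist-uid=815
# Permit firewall-cmd invocations through both the /bin and /usr/bin paths
firewall-cmd --permanent --add-lockdown-whitelist-command='/usr/libexec/platform-python -s /bin/firewall-cmd*'
firewall-cmd --permanent --add-lockdown-whitelist-command='/usr/libexec/platform-python -s /usr/bin/firewall-cmd*'
# Apply the changes and review the permanent allowlist
firewall-cmd --reload
firewall-cmd --permanent --list-lockdown-whitelist-commands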
41.16. Enabling traffic forwarding between different interfaces or sources within a firewalld zone Intra-zone forwarding is a firewalld feature that enables traffic forwarding between interfaces or sources within a firewalld zone. 41.16.1. The difference between intra-zone forwarding and zones with the default target set to ACCEPT With intra-zone forwarding enabled, the traffic within a single firewalld zone can flow from one interface or source to another interface or source. The zone specifies the trust level of interfaces and sources. If the trust level is the same, the traffic stays inside the same zone. Note Enabling intra-zone forwarding in the default zone of firewalld applies only to the interfaces and sources added to the current default zone. firewalld uses different zones to manage incoming and outgoing traffic. Each zone has its own set of rules and behaviors. For example, the trusted zone allows all forwarded traffic by default. Other zones can have different default behaviors. In standard zones, forwarded traffic is typically dropped by default when the target of the zone is set to default . To control how the traffic is forwarded between different interfaces or sources within a zone, make sure you understand and configure the target of the zone accordingly. 41.16.2. Using intra-zone forwarding to forward traffic between an Ethernet and Wi-Fi network You can use intra-zone forwarding to forward traffic between interfaces and sources within the same firewalld zone. This feature brings the following benefits: Seamless connectivity between wired and wireless devices (you can forward traffic between an Ethernet network connected to enp1s0 and a Wi-Fi network connected to wlp0s20 ) Support for flexible work environments Shared resources that are accessible and used by multiple devices or users within a network (such as printers, databases, network-attached storage, and others) Efficient internal networking (such as smooth communication, reduced latency, resource accessibility, and others) You can enable this functionality for individual firewalld zones. Procedure Enable packet forwarding in the kernel: Ensure that the interfaces between which you want to enable intra-zone forwarding are assigned only to the internal zone: If the interface is currently assigned to a zone other than internal , reassign it: Add the enp1s0 and wlp0s20 interfaces to the internal zone: Enable intra-zone forwarding: Verification The following verification steps require that the nmap-ncat package is installed on both hosts. Log in to a host that is on the same network as the enp1s0 interface of the host on which you enabled zone forwarding. Start an echo service with ncat to test connectivity: Log in to a host that is in the same network as the wlp0s20 interface. Connect to the echo server running on the host that is in the same network as the enp1s0 interface: Type something and press Enter . Verify the text is sent back. Additional resources firewalld.zones(5) man page on your system
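A condensed command sketch of the procedure above, assuming the enp1s0 and wlp0s20 interface names from the example; adjust the names to your environment. The final --runtime-to-permanent step is optional and only needed if you want to persist the runtime changes:
# Enable packet forwarding in the kernel
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/95-IPv4-forwarding.conf
sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf
# Check current zone assignments, then add both interfaces to the internal zone
firewall-cmd --get-active-zones
firewall-cmd --zone=internal --add-interface=enp1s0 --add-interface=wlp0s20
# Enable intra-zone forwarding and optionally persist the runtime configuration
firewall-cmd --zone=internal --add-forward
firewall-cmd --runtime-to-permanent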
41.17. Configuring firewalld by using RHEL system roles RHEL system roles are a collection of content for the Ansible automation utility. This content, together with the Ansible automation utility, provides a consistent configuration interface to remotely manage multiple systems at once. The rhel-system-roles package contains the rhel-system-roles.firewall RHEL system role. This role was introduced to automate the configuration of the firewalld service. With the firewall RHEL system role, you can configure many different firewalld parameters, for example: Zones The services for which packets should be allowed Granting, rejecting, or dropping of traffic access to ports Forwarding of ports or port ranges for a zone 41.17.1. Resetting the firewalld settings by using the firewall RHEL system role Over time, updates to your firewall configuration can accumulate to the point where they could lead to unintended security risks. With the firewall RHEL system role, you can reset the firewalld settings to their default state in an automated fashion. This way, you can efficiently remove any unintentional or insecure firewall rules and simplify their management. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - previous: replaced The settings specified in the example playbook include the following: previous: replaced Removes all existing user-defined settings and resets the firewalld settings to defaults. If you combine the previous: replaced parameter with other settings, the firewall role removes all existing settings before applying new ones, as illustrated in the sketch at the end of this section. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Run this command on the control node to remotely check that all firewall configuration on your managed node was reset to its default values: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory
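The combination of previous: replaced with other settings, mentioned above, could look like the following hedged sketch. The ssh service and the play names are assumptions chosen for illustration; they are not part of the documented example:
---
- name: Reset firewalld and keep only SSH open (illustrative sketch)
  hosts: managed-node-01.example.com
  tasks:
    - name: Reset firewalld, then allow only ssh
      ansible.builtin.include_role:
        name: rhel-system-roles.firewall
      vars:
        firewall:
          # Remove all existing user-defined firewall settings first
          - previous: replaced
          # Then apply the new, minimal rule set
          - service: ssh
            state: enabled
            permanent: true
            runtime: true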
41.17.2. Forwarding incoming traffic in firewalld from one local port to a different local port by using the firewall RHEL system role You can use the firewall RHEL system role to remotely configure forwarding of incoming traffic from one local port to a different local port. For example, if you have an environment where multiple services co-exist on the same machine and need the same default port, port conflicts are likely to occur. These conflicts can disrupt services and cause downtime. With the firewall RHEL system role, you can efficiently forward traffic to alternative ports to ensure that your services can run simultaneously without modification to their configuration. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true The settings specified in the example playbook include the following: forward_port: 8080/tcp;443 Traffic coming to the local port 8080 using the TCP protocol is forwarded to port 443. runtime: true Enables changes in the runtime configuration. The default is set to true . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the control node, run the following command to remotely check the forwarded ports on your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory 41.17.3. Configuring a firewalld DMZ zone by using the firewall RHEL system role As a system administrator, you can use the firewall RHEL system role to configure a dmz zone on the enp1s0 interface to permit HTTPS traffic to the zone. In this way, you enable external users to access your web servers. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the control node, run the following command to remotely check the information about the dmz zone on your managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.firewall/README.md file /usr/share/doc/rhel-system-roles/firewall/ directory
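For both playbooks in this section, the validation, run, and verification steps typically look as follows when run on the control node; managed-node-01.example.com is the example host used throughout and must match your inventory:
ansible-playbook --syntax-check ~/playbook.yml
ansible-playbook ~/playbook.yml
# Remotely inspect the dmz zone; use 'firewall-cmd --list-forward-ports' instead for the port-forwarding playbook
ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all'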
[ "<?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>My Zone</short> <description>Here you can describe the characteristic features of the zone.</description> <service name=\"ssh\"/> <port protocol=\"udp\" port=\"1025-65535\"/> <port protocol=\"tcp\" port=\"1025-65535\"/> </zone>", "firewall-cmd --get-zones", "firewall-cmd --add-service=ssh --zone= <your_chosen_zone> firewall-cmd --remove-service=ftp --zone= <same_chosen_zone>", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <your_chosen_zone> --change-interface=< interface_name > --permanent", "firewall-cmd --zone= <your_chosen_zone> --list-all", "firewall-cmd --get-default-zone", "firewall-cmd --set-default-zone <zone_name >", "firewall-cmd --get-active-zones", "firewall-cmd --zone= zone_name --change-interface= interface_name --permanent", "nmcli connection modify profile connection.zone zone_name", "nmcli connection up profile", "nmcli -f NAME,FILENAME connection NAME FILENAME enp1s0 /etc/NetworkManager/system-connections/enp1s0.nmconnection enp7s0 /etc/sysconfig/network-scripts/ifcfg-enp7s0", "[connection] zone=internal", "ZONE=internal", "nmcli connection reload", "nmcli connection up <profile_name>", "firewall-cmd --get-zone-of-interface enp1s0 internal", "ZONE= zone_name", "firewall-cmd --permanent --new-zone= zone-name", "firewall-cmd --reload", "firewall-cmd --get-zones --permanent", "firewall-cmd --zone= zone-name --list-all", "firewall-cmd --permanent --zone=zone-name --set-target=<default|ACCEPT|REJECT|DROP>", "firewall-cmd --list-services ssh dhcpv6-client", "firewall-cmd --get-services RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6 dhcpv6-client dns docker-registry", "firewall-cmd --add-service= <service_name>", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all --permanent public target: default icmp-block-inversion: no interfaces: sources: services: cockpit dhcpv6-client ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:", "firewall-cmd --check-config success", "firewall-cmd --check-config Error: INVALID_PROTOCOL: 'public.xml': 'tcpx' not from {'tcp'|'udp'|'sctp'|'dccp'}", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <zone_name> --add-service=https --permanent", "firewall-cmd --reload", "firewall-cmd --zone= <zone_name> --list-all", "firewall-cmd --zone= <zone_name> --list-services", "firewall-cmd --list-ports", "firewall-cmd --remove-port=port-number/port-type", "firewall-cmd --runtime-to-permanent", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <zone_to_inspect> --list-ports", "firewall-cmd --panic-on", "firewall-cmd --panic-off", "firewall-cmd --query-panic", "firewall-cmd --add-source=<source>", "firewall-cmd --zone=zone-name --add-source=<source>", "firewall-cmd --get-zones", "firewall-cmd --zone=trusted --add-source=192.168.2.15", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=zone-name --list-sources", "firewall-cmd --zone=zone-name --remove-source=<source>", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=zone-name --remove-source-port=<port-name>/<tcp|udp|sctp|dccp>", "firewall-cmd --get-zones block dmz drop external home internal public trusted work", "firewall-cmd --zone=internal --add-source=192.0.2.0/24", "firewall-cmd --zone=internal --add-service=http", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=internal --list-all internal (active) 
target: default icmp-block-inversion: no interfaces: sources: 192.0.2.0/24 services: cockpit dhcpv6-client mdns samba-client ssh http", "firewall-cmd --permanent --new-policy myOutputPolicy firewall-cmd --permanent --policy myOutputPolicy --add-ingress-zone HOST firewall-cmd --permanent --policy myOutputPolicy --add-egress-zone ANY", "firewall-cmd --permanent --policy mypolicy --set-priority -500", "firewall-cmd --permanent --new-policy podmanToAny", "firewall-cmd --permanent --policy podmanToAny --set-target REJECT firewall-cmd --permanent --policy podmanToAny --add-service dhcp firewall-cmd --permanent --policy podmanToAny --add-service dns firewall-cmd --permanent --policy podmanToAny --add-service https", "firewall-cmd --permanent --new-zone=podman", "firewall-cmd --permanent --policy podmanToHost --add-ingress-zone podman", "firewall-cmd --permanent --policy podmanToHost --add-egress-zone ANY", "systemctl restart firewalld", "firewall-cmd --info-policy podmanToAny podmanToAny (active) target: REJECT ingress-zones: podman egress-zones: ANY services: dhcp dns https", "firewall-cmd --permanent --policy mypolicy --set-target CONTINUE", "firewall-cmd --info-policy mypolicy", "firewall-cmd --permanent --new-policy <example_policy>", "firewall-cmd --permanent --policy= <example_policy> --add-ingress-zone=HOST firewall-cmd --permanent --policy= <example_policy> --add-egress-zone=ANY", "firewall-cmd --permanent --policy= <example_policy> --add-rich-rule='rule family=\"ipv4\" destination address=\" 192.0.2.1 \" forward-port port=\" 443 \" protocol=\"tcp\" to-port=\" 443 \" to-addr=\" 192.51.100.20 \"'", "firewall-cmd --reload success", "echo \"net.ipv4.conf.all.route_localnet=1\" > /etc/sysctl.d/90-enable-route-localnet.conf", "sysctl -p /etc/sysctl.d/90-enable-route-localnet.conf", "curl https://192.0.2.1:443", "sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.route_localnet = 1", "firewall-cmd --info-policy= <example_policy> example_policy (active) priority: -1 target: CONTINUE ingress-zones: HOST egress-zones: ANY services: ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: rule family=\"ipv4\" destination address=\"192.0.2.1\" forward-port port=\"443\" protocol=\"tcp\" to-port=\"443\" to-addr=\"192.51.100.20\"", "firewall-cmd --zone= external --query-masquerade", "firewall-cmd --zone= external --add-masquerade", "firewall-cmd --zone= external --remove-masquerade", "net.ipv4.ip_forward=1", "sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf", "firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toaddr=198.51.100.10:toport=8080 --permanent", "firewall-cmd --reload", "firewall-cmd --list-forward-ports --zone=public port=80:proto=tcp:toport=8080:toaddr=198.51.100.10", "cat /etc/firewalld/zones/public.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. 
Only selected incoming connections are accepted.</description> <service name=\"ssh\"/> <service name=\"dhcpv6-client\"/> <service name=\"cockpit\"/> <forward-port port=\"80\" protocol=\"tcp\" to-port=\"8080\" to-addr=\"198.51.100.10\"/> <forward/> </zone>", "net.ipv4.ip_forward=1", "sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf", "firewall-cmd --zone=public --add-forward-port=port= <standard_port> :proto=tcp:toport= <non_standard_port> --permanent", "firewall-cmd --reload", "firewall-cmd --list-forward-ports port=8080:proto=tcp:toport=80:toaddr=", "cat /etc/firewalld/zones/public.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description> <service name=\"ssh\"/> <service name=\"dhcpv6-client\"/> <service name=\"cockpit\"/> <forward-port port=\"8080\" protocol=\"tcp\" to-port=\"80\"/> <forward/> </zone>", "firewall-cmd --get-icmptypes address-unreachable bad-header beyond-scope communication-prohibited destination-unreachable echo-reply echo-request failed-policy fragmentation-needed host-precedence-violation host-prohibited host-redirect host-unknown host-unreachable", "firewall-cmd --zone= <target-zone> --remove-icmp-block= echo-request --permanent", "firewall-cmd --zone= <target-zone> --add-icmp-block= redirect --permanent", "firewall-cmd --reload", "firewall-cmd --list-icmp-blocks redirect", "firewall-cmd --permanent --new-ipset= allowlist --type=hash:ip", "firewall-cmd --permanent --ipset= allowlist --add-entry= 198.51.100.10", "firewall-cmd --permanent --zone=public --add-source=ipset: allowlist", "firewall-cmd --reload", "firewall-cmd --get-ipsets allowlist", "firewall-cmd --list-all public (active) target: default icmp-block-inversion: no interfaces: enp0s1 sources: ipset:allowlist services: cockpit dhcpv6-client ssh ports: protocols:", "cat /etc/firewalld/ipsets/allowlist.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <ipset type=\"hash:ip\"> <entry>198.51.100.10</entry> </ipset>", "firewall-cmd --add-rich-rule='rule priority=32767 log prefix=\"UNEXPECTED: \" limit value=\"5/m\"'", "nft list chain inet firewalld filter_IN_public_post table inet firewalld { chain filter_IN_public_post { log prefix \"UNEXPECTED: \" limit rate 5/minute } }", "firewall-cmd --query-lockdown", "firewall-cmd --lockdown-on", "firewall-cmd --lockdown-off", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/bin/python3 -s /usr/bin/firewall-config\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <selinux context=\"system_u:system_r:virtd_t:s0-s0:c0.c1023\"/> <user id=\"0\"/> </whitelist>", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/libexec/platform-python -s /bin/firewall-cmd*\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <user id=\"815\"/> <user name=\"user\"/> </whitelist>", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "firewall-cmd --get-active-zones", "firewall-cmd --zone=internal --change-interface= interface_name --permanent", "firewall-cmd --zone=internal --add-interface=enp1s0 --add-interface=wlp0s20", "firewall-cmd --zone=internal --add-forward", "ncat -e /usr/bin/cat -l 12345", "ncat <other_host> 12345", "--- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: 
name: rhel-system-roles.firewall vars: firewall: - previous: replaced", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports' managed-node-01.example.com | CHANGED | rc=0 >> port=8080:proto=tcp:toport=443:toaddr=", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all' managed-node-01.example.com | CHANGED | rc=0 >> dmz (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/using-and-configuring-firewalld_configuring-and-managing-networking