title
stringlengths
4
168
content
stringlengths
7
1.74M
commands
sequencelengths
1
5.62k
url
stringlengths
79
342
Chapter 3. Debug Parameters
Chapter 3. Debug Parameters These parameters allow you to set debug mode on a per-service basis. The Debug parameter acts as a global parameter for all services and the per-service parameters can override the effects of global parameter on individual services. Parameter Description BarbicanDebug Set to True to enable debugging OpenStack Key Manager (barbican) service. CinderDebug Set to True to enable debugging on OpenStack Block Storage (cinder) services. ConfigDebug Whether to run configuration management (e.g. Puppet) in debug mode. The default value is False . Debug Set to True to enable debugging on all services. The default value is False . GlanceDebug Set to True to enable debugging OpenStack Image Storage (glance) service. HeatDebug Set to True to enable debugging OpenStack Orchestration (heat) services. HorizonDebug Set to True to enable debugging OpenStack Dashboard (horizon) service. The default value is False . IronicDebug Set to True to enable debugging OpenStack Bare Metal (ironic) services. KeystoneDebug Set to True to enable debugging OpenStack Identity (keystone) service. ManilaDebug Set to True to enable debugging OpenStack Shared File Systems (manila) services. NeutronDebug Set to True to enable debugging OpenStack Networking (neutron) services. NovaDebug Set to True to enable debugging OpenStack Compute (nova) services. SaharaDebug Set to True to enable debugging OpenStack Clustering (sahara) services.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/debug-parameters
8.2.5. Removing Packages
8.2.5. Removing Packages Similarly to package installation, Yum allows you to uninstall (remove in RPM and Yum terminology) both individual packages and a package group. Removing Individual Packages To uninstall a particular package, as well as any packages that depend on it, run the following command as root : yum remove package_name As when you install multiple packages, you can remove several at once by adding more package names to the command. For example, to remove totem , rhythmbox , and sound-juicer , type the following at a shell prompt: Similar to install , remove can take these arguments: package names glob expressions file lists package provides Warning Yum is not able to remove a package without also removing packages which depend on it. This type of operation can only be performed by RPM , is not advised, and can potentially leave your system in a non-functioning state or cause applications to misbehave and/or crash. For further information, see Section B.2.4, "Uninstalling" in the RPM chapter. Removing a Package Group You can remove a package group using syntax congruent with the install syntax: yum groupremove group yum remove @ group The following are alternative but equivalent ways of removing the KDE Desktop group: Important When you tell yum to remove a package group, it will remove every package in that group, even if those packages are members of other package groups or dependencies of other installed packages. However, you can instruct yum to remove only those packages which are not required by any other packages or groups by adding the groupremove_leaf_only=1 directive to the [main] section of the /etc/yum.conf configuration file. For more information on this directive, see Section 8.4.1, "Setting [main] Options" .
[ "~]# yum remove totem rhythmbox sound-juicer", "~]# yum groupremove \"KDE Desktop\" ~]# yum groupremove kde-desktop ~]# yum remove @kde-desktop" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Removing
Chapter 10. Pre-installation validation
Chapter 10. Pre-installation validation 10.1. Definition of pre-installation validations The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation. The Assisted Installer will use the information provided prior to installation, such as control plane topology, network configuration and hostnames. It will also use real time telemetry from the hosts you are attempting to install. When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer. The Assisted Installer uses all of this information to compute real time pre-installation validations. All validations are either blocking or non-blocking to the installation. 10.2. Blocking and non blocking validations A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed. A non blocking validation is a warning and will tell you of things that might cause you a problem. 10.3. Validation types The Assisted Installer performs two types of validation: Host Host validations ensure that the configuration of a given host is valid for installation. Cluster Cluster validations ensure that the configuration of the whole cluster is valid for installation. 10.4. Host validations 10.4.1. Getting host validations by using the REST API Note If you use the web based UI, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You have hosts booted with the discovery ISO You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[])' Get non-passing validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[]) | map(select(.status=="failure" or .status=="pending")) | select(length>0)' 10.4.2. Host validations in detail Parameter Validation type Description connected non-blocking Checks that the host has recently communicated with the Assisted Installer. has-inventory non-blocking Checks that the Assisted Installer received the inventory from the host. has-min-cpu-cores non-blocking Checks that the number of CPU cores meets the minimum requirements. has-min-memory non-blocking Checks that the amount of memory meets the minimum requirements. has-min-valid-disks non-blocking Checks that at least one available disk meets the eligibility criteria. has-cpu-cores-for-role blocking Checks that the number of cores meets the minimum requirements for the host role. has-memory-for-role blocking Checks that the amount of memory meets the minimum requirements for the host role. ignition-downloadable blocking For day 2 hosts, checks that the host can download ignition configuration from the day 1 cluster. belongs-to-majority-group blocking The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, day 1 cluster are in the majority group. valid-platform-network-settings blocking Checks that the platform is valid for the network settings. ntp-synced non-blocking Checks if an NTP server has been successfully used to synchronize time on the host. container-images-available non-blocking Checks if container images have been successfully pulled from the image registry. sufficient-installation-disk-speed blocking Checks that disk speed metrics from an earlier installation meet requirements, if they exist. sufficient-network-latency-requirement-for-role blocking Checks that the average network latency between hosts in the cluster meets the requirements. sufficient-packet-loss-requirement-for-role blocking Checks that the network packet loss between hosts in the cluster meets the requirements. has-default-route blocking Checks that the host has a default route configured. api-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the API domain name for the cluster. api-int-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal API domain name for the cluster. apps-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal apps domain name for the cluster. compatible-with-cluster-platform non-blocking Checks that the host is compatible with the cluster platform dns-wildcard-not-configured blocking Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift disk-encryption-requirements-satisfied non-blocking Checks that the type of host and disk encryption configured meet the requirements. non-overlapping-subnets blocking Checks that this host does not have any overlapping subnets. hostname-unique blocking Checks that the hostname is unique in the cluster. hostname-valid blocking Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden. belongs-to-machine-cidr blocking Checks that the host IP is in the address range of the machine CIDR. lso-requirements-satisfied blocking Validates that the cluster meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the cluster meets the requirements of the Openshift Data Foundation Operator. The cluster has a minimum of 3 hosts. The cluster has only 3 masters or a minimum of 3 workers. The cluster has 3 eligible disks and each host must have an eligible disk. The host role must not be "Auto Assign" for clusters with more than three hosts. cnv-requirements-satisfied blocking Validates that the cluster meets the requirements of Container Native Virtualization. The BIOS of the host must have CPU virtualization enabled. Host must have enough CPU cores and RAM available for Container Native Virtualization. Will validate the Host Path Provisioner if necessary. lvm-requirements-satisfied blocking Validates that the cluster meets the requirements of the Logical Volume Manager Operator. Host has at least one additional empty disk, not partitioned and not formatted. vsphere-disk-uuid-enabled non-blocking Verifies that each valid disk sets disk.EnableUUID to true . In VSphere this will result in each disk having a UUID. compatible-agent blocking Checks that the discovery agent version is compatible with the agent docker image version. no-skip-installation-disk blocking Checks that installation disk is not skipping disk formatting. no-skip-missing-disk blocking Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that. media-connected blocking Checks the connection of the installation media to the host. machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. id-platform-network-settings blocking Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node Openshift or when using User Managed Networking. 10.5. Cluster validations 10.5.1. Getting cluster validations by using the REST API Note: If you use the web based UI, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq 'map(.[])' Get non-passing cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq '. | map(.[] | select(.status=="failure" or .status=="pending")) | select(length>0)' 10.5.2. Cluster validations in detail Parameter Validation type Description machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. cluster-cidr-defined non-blocking Checks that the cluster network definition exists for the cluster. service-cidr-defined non-blocking Checks that the service network definition exists for the cluster. no-cidrs-overlapping blocking Checks that the defined networks do not overlap. networks-same-address-families blocking Checks that the defined networks share the same address families (valid address families are IPv4, IPv6) network-prefix-valid blocking Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts. machine-cidr-equals-to-calculated-cidr blocking For a non user managed networking cluster. Checks that apiVIPs or ingressVIPs are members of the machine CIDR if they exist. api-vips-defined non-blocking For a non user managed networking cluster. Checks that apiVIPs exist. api-vips-valid blocking For a non user managed networking cluster. Checks if the apiVIPs belong to the machine CIDR and are not in use. ingress-vips-defined blocking For a non user managed networking cluster. Checks that ingressVIPs exist. ingress-vips-valid non-blocking For a non user managed networking cluster. Checks if the ingressVIPs belong to the machine CIDR and are not in use. all-hosts-are-ready-to-install blocking Checks that all hosts in the cluster are in the "ready to install" status. sufficient-masters-count blocking This validation only applies to multi-node clusters. The cluster must have exactly three masters. If the cluster has worker nodes, a minimum of 2 worker nodes must exist. dns-domain-defined non-blocking Checks that the base DNS domain exists for the cluster. pull-secret-set non-blocking Checks that the pull secret exists. Does not check that the pull secret is valid or authorized. ntp-server-configured blocking Checks that each of the host clocks are no more than 4 minutes out of sync with each other. lso-requirements-satisfied blocking Validates that the cluster meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the cluster meets the requirements of the Openshift Data Foundation Operator. The cluster has a minimum of 3 hosts. The cluster has only 3 masters or a minimum of 3 workers. The cluster has 3 eligible disks and each host must have an eligible disk. cnv-requirements-satisfied blocking Validates that the cluster meets the requirements of Container Native Virtualization. The CPU architecture for the cluster is x86 lvm-requirements-satisfied blocking Validates that the cluster meets the requirements of the Logical Volume Manager Operator. The cluster must be single node. The cluster must be running Openshift >= 4.11.0. network-type-valid blocking Checks the validity of the network type if it exists. The network type must be OpenshiftSDN or OVNKubernetes. OpenshiftSDN does not support IPv6 or Single Node Openshift. OVNKubernetes does not support VIP DHCP allocation.
[ "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[]) | map(select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'", "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq '. | map(.[] | select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/assembly_pre-installation-validation
Chapter 133. KafkaBridge schema reference
Chapter 133. KafkaBridge schema reference Property Property type Description spec KafkaBridgeSpec The specification of the Kafka Bridge. status KafkaBridgeStatus The status of the Kafka Bridge.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaBridge-reference
Installation overview
Installation overview OpenShift Container Platform 4.15 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_overview/index
8.19. bfa-firmware
8.19. bfa-firmware 8.19.1. RHBA-2014:1486 - bfa-firmware bug fix and enhancement update Updated bfa-firmware package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The bfa-firmware package contains the Brocade Fibre Channel Host Bus Adapter (HBA) Firmware to run Brocade Fibre Channel and CNA adapters. This package also supports the Brocade BNA network adapter. Note The bfa-firmware package has been upgraded to upstream version 3.2.23, which provides a number of bug fixes and enhancements over the version. (BZ# 1054467 ) All users of bfa-firmware are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/bfa-firmware
Preface
Preface Red Hat offers administrators tools for gathering data for your Red Hat Quay deployment. You can use this data to troubleshoot your Red Hat Quay deployment yourself, or file a support ticket.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/troubleshooting_red_hat_quay/pr01
Chapter 7. Exposing the RHACS portal over HTTP
Chapter 7. Exposing the RHACS portal over HTTP Enable an unencrypted HTTP server to expose the RHACS portal through ingress controllers, Layer 7 load balancers, Istio, or other solutions. If you use an ingress controller, Istio, or a Layer 7 load balancer that prefers unencrypted HTTP back ends, you can configure Red Hat Advanced Cluster Security for Kubernetes to expose the RHACS portal over HTTP. Doing this makes the RHACS portal available over a plaintext back end. Important To expose the RHACS portal over HTTP, you must be using an ingress controller, a Layer 7 load balancer, or Istio to encrypt external traffic with HTTPS. It is insecure to expose the RHACS portal directly to external clients by using plain HTTP. You can expose the RHACS portal over HTTP during installation or on an existing deployment. 7.1. Prerequisites To specify an HTTP endpoint you must use an <endpoints_spec> . It is a comma-separated list of single endpoint specifications in the form of <type>@<addr>:<port> , where: type is grpc or http . Using http as type works in most use cases. For advanced use cases, you can either use grpc or omit its value. If you omit the value for type , you can configure two endpoints in your proxy, one for gRPC and the other for HTTP. Both these endpoints point to the same exposed HTTP port on Central. However, most proxies do not support carrying both gRPC and HTTP traffic on the same external port. addr is the IP address to expose Central on. You can omit this, or use localhost or 127.0.0.1 if you need an HTTP endpoint which is only accessible by using port-forwarding. port is the port to expose Central on. The following are several valid <endpoints_spec> values: 8080 http@8080 :8081 grpc@:8081 localhost:8080 http@localhost:8080 http@8080,grpc@8081 8080, grpc@:8081, [email protected]:8082 7.2. Exposing the RHACS portal over HTTP during the installation If you are installing Red Hat Advanced Cluster Security for Kubernetes using the roxctl CLI, use the --plaintext-endpoints option with the roxctl central generate interactive command to enable the HTTP server during the installation. Procedure Run the following command to specify an HTTP endpoint during the interactive installation process: USD roxctl central generate interactive \ --plaintext-endpoints=<endpoints_spec> 1 1 Endpoint specifications in the form of <type>@<addr>:<port> . See the Prerequisites section for details. 7.3. Exposing the RHACS portal over HTTP for an existing deployment You can enable the HTTP server on an existing Red Hat Advanced Cluster Security for Kubernetes deployment. Procedure Create a patch and define a ROX_PLAINTEXT_ENDPOINTS environment variable: USD CENTRAL_PLAINTEXT_PATCH=' spec: template: spec: containers: - name: central env: - name: ROX_PLAINTEXT_ENDPOINTS value: <endpoints_spec> 1 ' 1 Endpoint specifications in the form of <type>@<addr>:<port> . See the Prerequisites section for details. Add the ROX_PLAINTEXT_ENDPOINTS environment variable to the Central deployment: USD oc -n stackrox patch deploy/central -p "USDCENTRAL_PLAINTEXT_PATCH"
[ "roxctl central generate interactive --plaintext-endpoints=<endpoints_spec> 1", "CENTRAL_PLAINTEXT_PATCH=' spec: template: spec: containers: - name: central env: - name: ROX_PLAINTEXT_ENDPOINTS value: <endpoints_spec> 1 '", "oc -n stackrox patch deploy/central -p \"USDCENTRAL_PLAINTEXT_PATCH\"" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/configuring/expose-portal-over-http
Chapter 9. FeatureGate [config.openshift.io/v1]
Chapter 9. FeatureGate [config.openshift.io/v1] Description Feature holds cluster-wide information about feature gates. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 9.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description customNoUpgrade `` customNoUpgrade allows the enabling or disabling of any feature. Turning this feature set on IS NOT SUPPORTED, CANNOT BE UNDONE, and PREVENTS UPGRADES. Because of its nature, this setting cannot be validated. If you have any typos or accidentally apply invalid combinations your cluster may fail in an unrecoverable way. featureSet must equal "CustomNoUpgrade" must be set to use this field. featureSet string featureSet changes the list of features in the cluster. The default is empty. Be very careful adjusting this setting. Turning on or off features may cause irreversible changes in your cluster which cannot be undone. 9.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions represent the observations of the current state. Known .status.conditions.type are: "DeterminationDegraded" conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } featureGates array featureGates contains a list of enabled and disabled featureGates that are keyed by payloadVersion. Operators other than the CVO and cluster-config-operator, must read the .status.featureGates, locate the version they are managing, find the enabled/disabled featuregates and make the operand and operator match. The enabled/disabled values for a particular version may change during the life of the cluster as various .spec.featureSet values are selected. Operators may choose to restart their processes to pick up these changes, but remembering past enable/disable lists is beyond the scope of this API and is the responsibility of individual operators. Only featureGates with .version in the ClusterVersion.status will be present in this list. featureGates[] object 9.1.3. .status.conditions Description conditions represent the observations of the current state. Known .status.conditions.type are: "DeterminationDegraded" Type array 9.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 9.1.5. .status.featureGates Description featureGates contains a list of enabled and disabled featureGates that are keyed by payloadVersion. Operators other than the CVO and cluster-config-operator, must read the .status.featureGates, locate the version they are managing, find the enabled/disabled featuregates and make the operand and operator match. The enabled/disabled values for a particular version may change during the life of the cluster as various .spec.featureSet values are selected. Operators may choose to restart their processes to pick up these changes, but remembering past enable/disable lists is beyond the scope of this API and is the responsibility of individual operators. Only featureGates with .version in the ClusterVersion.status will be present in this list. Type array 9.1.6. .status.featureGates[] Description Type object Required version Property Type Description disabled array disabled is a list of all feature gates that are disabled in the cluster for the named version. disabled[] object enabled array enabled is a list of all feature gates that are enabled in the cluster for the named version. enabled[] object version string version matches the version provided by the ClusterVersion and in the ClusterOperator.Status.Versions field. 9.1.7. .status.featureGates[].disabled Description disabled is a list of all feature gates that are disabled in the cluster for the named version. Type array 9.1.8. .status.featureGates[].disabled[] Description Type object Required name Property Type Description name string name is the name of the FeatureGate. 9.1.9. .status.featureGates[].enabled Description enabled is a list of all feature gates that are enabled in the cluster for the named version. Type array 9.1.10. .status.featureGates[].enabled[] Description Type object Required name Property Type Description name string name is the name of the FeatureGate. 9.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/featuregates DELETE : delete collection of FeatureGate GET : list objects of kind FeatureGate POST : create a FeatureGate /apis/config.openshift.io/v1/featuregates/{name} DELETE : delete a FeatureGate GET : read the specified FeatureGate PATCH : partially update the specified FeatureGate PUT : replace the specified FeatureGate /apis/config.openshift.io/v1/featuregates/{name}/status GET : read status of the specified FeatureGate PATCH : partially update status of the specified FeatureGate PUT : replace status of the specified FeatureGate 9.2.1. /apis/config.openshift.io/v1/featuregates Table 9.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of FeatureGate Table 9.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind FeatureGate Table 9.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.5. HTTP responses HTTP code Reponse body 200 - OK FeatureGateList schema 401 - Unauthorized Empty HTTP method POST Description create a FeatureGate Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body FeatureGate schema Table 9.8. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 201 - Created FeatureGate schema 202 - Accepted FeatureGate schema 401 - Unauthorized Empty 9.2.2. /apis/config.openshift.io/v1/featuregates/{name} Table 9.9. Global path parameters Parameter Type Description name string name of the FeatureGate Table 9.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a FeatureGate Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.12. Body parameters Parameter Type Description body DeleteOptions schema Table 9.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FeatureGate Table 9.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.15. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FeatureGate Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.17. Body parameters Parameter Type Description body Patch schema Table 9.18. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FeatureGate Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body FeatureGate schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 201 - Created FeatureGate schema 401 - Unauthorized Empty 9.2.3. /apis/config.openshift.io/v1/featuregates/{name}/status Table 9.22. Global path parameters Parameter Type Description name string name of the FeatureGate Table 9.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified FeatureGate Table 9.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 9.25. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified FeatureGate Table 9.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.27. Body parameters Parameter Type Description body Patch schema Table 9.28. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified FeatureGate Table 9.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.30. Body parameters Parameter Type Description body FeatureGate schema Table 9.31. HTTP responses HTTP code Reponse body 200 - OK FeatureGate schema 201 - Created FeatureGate schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/featuregate-config-openshift-io-v1
5.313. strace
5.313. strace 5.313.1. RHBA-2012:1317 - strace bug fix and enhancement update Updated strace packages that fix a bug are now available for Red Hat Enterprise Linux 6. The strace packages provide an utility to intercept and record the system calls called and received by a running process. The strace utility can print a record of each system call, its arguments, and its return value. The strace utility is useful for diagnosing, debugging, and instructional purposes. Bug Fix BZ# 849052 Previously, the strace utility used magic breakpoints in the process startup code to detect and control process startup. Consequently, under certain circumstances, the %ebx register could be corrupted around the clone syscall within the libc_fork() function, which could cause an application to terminate unexpectedly with a segmentation fault while under strace control. This update changes strace to use the TRACE{FORK,VFORK,CLONE} ptrace capabilities which provide a cleaner, less error-prone interface to monitor and control process startup when tracing, thus preventing this bug. All users of strace are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/strace
Chapter 4. Configuring the instance environment
Chapter 4. Configuring the instance environment You can configure the following in Administration view: General Credentials Repositories Proxy (HTTP and HTTPS proxy settings) Custom migration targets Issue management Assessment questionnaires 4.1. General You can enable or disable the following option: Allow reports to be downloaded after running an analysis 4.2. Configuring credentials You can configure the following types of credentials in Administration view: Source control Maven settings file Proxy Basic auth (Jira) Bearer token (Jira) 4.2.1. Configuring source control credentials You can configure source control credentials in the Credentials view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Credentials . Click Create new . Enter the following information: Name Description (Optional) In the Type list, select Source Control . In the User credentials list, select Credential Type and enter the requested information: Username/Password Username Password (hidden) SCM Private Key/Passphrase SCM Private Key Private Key Passphrase (hidden) Note Type-specific credential information such as keys and passphrases is either hidden or shown as [Encrypted]. Click Create . MTA validates the input and creates a new credential. SCM keys must be parsed and checked for validity. If the validation fails, the following error message is displayed: "not a valid key/XML file" . 4.2.2. Configuring Maven credentials You can configure new Maven credentials in the Credentials view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Credentials . Click Create new . Enter the following information: Name Description (Optional) In the Type list, select Maven Settings File. Upload the settings file or paste its contents. Click Create . MTA validates the input and creates a new credential. The Maven settings.xml file must be parsed and checked for validity. If the validation fails, the following error message is displayed: "not a valid key/XML file" . 4.2.3. Configuring proxy credentials You can configure proxy credentials in the Credentials view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Credentials . Click Create new . Enter the following information: Name Description (Optional) In the Type list, select Proxy . Enter the following information. Username Password Note Type-specific credential information such as keys and passphrases is either hidden or shown as [Encrypted]. Click Create . MTA validates the input and creates a new credential. 4.3. Configuring repositories You can configure the following types of repositories in Administration view: Git Subversion Maven 4.3.1. Configuring Git repositories You can configure Git repositories in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Repositories and then click Git . Toggle the Consume insecure Git repositories switch to the right. 4.3.2. Configuring subversion repositories You can configure subversion repositories in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Repositories and then click Subversion . Toggle the Consume insecure Subversion repositories switch to the right. 4.3.3. Configuring a Maven repository and reducing its size You can use the MTA user interface to both configure a Maven repository and to reduce its size. 4.3.3.1. Configuring a Maven repository You can configure a Maven repository in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Repositories and then click Maven . Toggle the Consume insecure artifact repositories switch to the right. 4.3.3.2. Reducing the size of a Maven repository You can reduce the size of a Maven repository in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Note If the rwx_supported configuration option of the Tackle CR is set to false , both the Local artifact repository field and the Clear repository button are disabled and this procedure is not possible. Procedure In Administration view, click Repositories and then click Maven . Click the Clear repository link. Note Depending on the size of the repository, the size change may not be evident despite the function working properly. 4.4. Configuring HTTP and HTTPS proxy settings You can configure HTTP and HTTPS proxy settings with this management module. Procedure In the Administration view, click Proxy . Toggle HTTP proxy or HTTPS proxy to enable the proxy connection. Enter the following information: Proxy host Proxy port Optional: Toggle HTTP proxy credentials or HTTPS proxy credentials to enable authentication. Click Insert . 4.5. Creating custom migration targets Architects or users with admin permissions can create and maintain custom rulesets associated with custom migration targets. Architects can upload custom rule files and assign them to various custom migration targets. The custom migration targets can then be selected in the analysis configuration wizard. By using ready-made custom migration targets, you can avoid configuring custom rules for each analysis run. This simplifies analysis configuration and execution for non-admin users or third-party developers. Prerequisites You are logged in as a user with admin permissions. Procedure In the Administration view, click Custom migration targets . Click Create new . Enter the name and description of the target. In the Image section, upload an image file for the target's icon. The file can be in either the PNG or JPEG format, up to 1 MB. If you do not upload any file, a default icon is used. In the Custom rules section, select either Upload manually or Retrieve from a repository : If you selected Upload manually , upload or drag and drop the required rule files from your local drive. If you selected Retrieve from a repository , complete the following steps: Choose Git or Subversion . Enter the Source repository , Branch , and Root path fields. If the repository requires credentials, enter these credentials in the Associated credentials field. Click Create . The new migration target appears on the Custom migration targets page. It can now be used by non-admin users in the Migration view. 4.6. Seeding an instance If you are a project architect, you can configure the instance's key parameters in the Controls window, before migration. The parameters can be added and edited as needed. The following parameters define applications, individuals, teams, verticals or areas within an organization affected or participating in the migration: Stakeholders Stakeholder groups Job functions Business services Tag categories Tags You can create and configure an instance in any order. However, the suggested order below is the most efficient for creating stakeholders and tags. Stakeholders: Create Stakeholder groups Create Job functions Create Stakeholders Tags: Create Tag categories Create Tags Stakeholders and defined by: Email Name Job function Stakeholder groups 4.6.1. Creating a new stakeholder group There are no default stakeholder groups defined. You can create a new stakeholder group by following the procedure below. Procedure In Migration view, click Controls . Click Stakeholder groups . Click Create new . Enter the following information: Name Description Member(s) Click Create . 4.6.2. Creating a new job function Migration Toolkit for Applications (MTA) uses the job function attribute to classify stakeholders and provides a list of default values that can be expanded. You can create a new job function, which is not in the default list, by following the procedure below. Procedure In Migration view, click Controls . Click Job functions . Click Create new . Enter a job function title in the Name text box. Click Create . 4.6.3. Creating a new stakeholder You can create a new migration project stakeholder by following the procedure below. Procedure In Migration view, click Controls . Click Stakeholders . Click Create new . Enter the following information: Email Name Job function - custom functions can be created Stakeholder group Click Create . 4.6.4. Creating a new business service Migration Toolkit for Applications (MTA) uses the business service attribute to specify the departments within the organization that use the application and that are affected by the migration. You can create a new business service by following the procedure below. Procedure In Migration view, click Controls . Click Business services . Click Create new . Enter the following information: Name Description Owner Click Create . 4.6.5. Creating new tag categories Migration Toolkit for Applications (MTA) uses tags in multiple categories and provides a list of default values. You can create a new tag category by following the procedure below. Procedure In Migration view, click Controls . Click Tags . Click Create tag category . Enter the following information: Name Rank - the order in which the tags appear on the applications Color Click Create . 4.6.5.1. Creating new tags You can create a new tag, which is not in the default list, by following the procedure below. Procedure In Migration view, click Controls . Click Tags . Click Create tag . Enter the following information: Name Tag category Click Create .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/user_interface_guide/configuring-the-instance-environment
Liquid Reference
Liquid Reference Red Hat 3scale API Management 2.15 Find additional information related to your 3scale API Management installation. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/liquid_reference/index
14.2. Types
14.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. Label files with the samba_share_t type to allow Samba to share them. Only label files you have created, and do not relabel system files with the samba_share_t type: Booleans can be enabled to share such files and directories. SELinux allows Samba to write to files labeled with the samba_share_t type, as long as the /etc/samba/smb.conf file and Linux permissions are set accordingly. The samba_etc_t type is used on certain files in the /etc/samba/ directory, such as smb.conf . Do not manually label files with the samba_etc_t type. If files in this directory are not labeled correctly, enter the restorecon -R -v /etc/samba command as the root user to restore such files to their default contexts. If /etc/samba/smb.conf is not labeled with the samba_etc_t type, starting the Samba service may fail and an SELinux denial message may be logged. The following is an example denial message when /etc/samba/smb.conf was labeled with the httpd_sys_content_t type:
[ "setroubleshoot: SELinux is preventing smbd (smbd_t) \"read\" to ./smb.conf (httpd_sys_content_t). For complete SELinux messages. run sealert -l deb33473-1069-482b-bb50-e4cd05ab18af" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-samba-types
Chapter 7. Troubleshooting disaster recovery
Chapter 7. Troubleshooting disaster recovery 7.1. Troubleshooting Metro-DR 7.1.1. A statefulset application stuck after failover Problem While relocating to a preferred cluster, DRPlacementControl is stuck reporting PROGRESSION as "MovingToSecondary". Previously, before Kubernetes v1.23, the Kubernetes control plane never cleaned up the PVCs created for StatefulSets. This activity was left to the cluster administrator or a software operator managing the StatefulSets. Due to this, the PVCs of the StatefulSets were left untouched when their Pods are deleted. This prevents Ramen from relocating an application to its preferred cluster. Resolution If the workload uses StatefulSets, and relocation is stuck with PROGRESSION as "MovingToSecondary", then run: For each bounded PVC for that namespace that belongs to the StatefulSet, run Once all PVCs are deleted, Volume Replication Group (VRG) transitions to secondary, and then gets deleted. Run the following command After a few seconds to a few minutes, the PROGRESSION reports "Completed" and relocation is complete. Result The workload is relocated to the preferred cluster BZ reference: [ 2118270 ] 7.1.2. DR policies protect all applications in the same namespace Problem While only a single application is selected to be used by a DR policy, all applications in the same namespace will be protected. This results in PVCs, that match the DRPlacementControl spec.pvcSelector across multiple workloads or if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Resolution Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. BZ reference: [ 2111163 ] 7.1.3. During failback of an application stuck in Relocating state Problem This issue might occur after performing failover and failback of an application (all nodes or cluster are up). When performing failback application stuck in the Relocating state with a message of Waiting for PV restore to complete. Resolution Use S3 client or equivalent to clean up the duplicate PV objects from the s3 store. Keep only the one that has a timestamp closer to the failover or relocate time. BZ reference: [ 2120201 ] 7.2. Troubleshooting Regional-DR 7.2.1. RBD mirroring scheduling is getting stopped for some images Problem There are a few common causes for RBD mirroring scheduling getting stopped for some images. After marking the applications for mirroring, for some reason, if it is not replicated, use the toolbox pod and run the following command to see which image scheduling is stopped. Resolution Restart the manager daemon on the primary cluster Disable and immediately re-enable mirroring on the affected images on the primary cluster BZ reference: [ 2067095 and 2121514 ] 7.2.2. rbd-mirror daemon health is in warning state Problem There appears to be numerous cases where WARNING gets reported if mirror service ::get_mirror_service_status calls Ceph monitor to get service status for rbd-mirror . Following a network disconnection, rbd-mirror daemon health is in the warning state while the connectivity between both the managed clusters is fine. Resolution Run the following command in the toolbox and look for leader:false If you see the following in the output: leader: false It indicates that there is a daemon startup issue and the most likely root cause could be due to problems reliably connecting to the secondary cluster. Workaround: Move the rbd-mirror pod to a different node by simply deleting the pod and verify that it has been rescheduled on another node. leader: true or no output Contact Red Hat Support . BZ reference: [ 2118627 ] 7.2.3. volsync-rsync-src pod is in error state as it is unable to resolve the destination hostname Problem VolSync source pod is unable to resolve the hostname of the VolSync destination pod. The log of the VolSync Pod consistently shows an error message over an extended period of time similar to the following log snippet. Example output Resolution Restart submariner-lighthouse-agent on both nodes.
[ "oc get pvc -n <namespace>", "oc delete pvc <pvcname> -n namespace", "oc get drpc -n <namespace> -o wide", "rbd snap ls <poolname/imagename> -all", "rbd mirror pool status --verbose ocs-storagecluster-cephblockpool | grep 'leader:'", "oc logs -n busybox-workloads-3-2 volsync-rsync-src-dd-io-pvc-1-p25rz", "VolSync rsync container version: ACM-0.6.0-ce9a280 Syncing data to volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local:22 ssh: Could not resolve hostname volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local: Name or service not known", "oc delete pod -l app=submariner-lighthouse-agent -n submariner-operator" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/troubleshooting_disaster_recovery
Chapter 3. Programmatically configuring user roles and permissions
Chapter 3. Programmatically configuring user roles and permissions Configure security authorization programmatically when using embedded caches in Java applications. 3.1. Data Grid user roles and permissions Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources. Role Permissions Description admin ALL Superuser with all permissions including control of the Cache Manager lifecycle. deployer ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE Can create and delete Data Grid resources in addition to application permissions. application ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. observer ALL_READ, MONITOR Has read access to Data Grid resources in addition to monitor permissions. monitor MONITOR Can view statistics via JMX and the metrics endpoint. Additional resources org.infinispan.security.AuthorizationPermission Enum Data Grid configuration schema reference 3.1.1. Permissions User roles are sets of permissions with different access levels. Table 3.1. Cache Manager permissions Permission Function Description CONFIGURATION defineConfiguration Defines new cache configurations. LISTEN addListener Registers listeners against a Cache Manager. LIFECYCLE stop Stops the Cache Manager. CREATE createCache , removeCache Create and remove container resources such as caches, counters, schemas, and scripts. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all Cache Manager permissions. Table 3.2. Cache permissions Permission Function Description READ get , contains Retrieves entries from a cache. WRITE put , putIfAbsent , replace , remove , evict Writes, replaces, removes, evicts data in a cache. EXEC distexec , streams Allows code execution against a cache. LISTEN addListener Registers listeners against a cache. BULK_READ keySet , values , entrySet , query Executes bulk retrieve operations. BULK_WRITE clear , putAll Executes bulk write operations. LIFECYCLE start , stop Starts and stops a cache. ADMIN getVersion , addInterceptor* , removeInterceptor , getInterceptorChain , getEvictionManager , getComponentRegistry , getDistributionManager , getAuthorizationManager , evict , getRpcManager , getCacheConfiguration , getCacheManager , getInvocationContextContainer , setAvailability , getDataContainer , getStats , getXAResource Allows access to underlying components and internal structures. MONITOR getStats Allows access to JMX statistics and the metrics endpoint. ALL - Includes all cache permissions. ALL_READ - Combines the READ and BULK_READ permissions. ALL_WRITE - Combines the WRITE and BULK_WRITE permissions. Additional resources Data Grid Security API 3.1.2. Role and permission mappers Data Grid implements users as a collection of principals. Principals represent either an individual user identity, such as a username, or a group to which the users belong. Internally, these are implemented with the javax.security.auth.Subject class. To enable authorization, the principals must be mapped to role names, which are then expanded into a set of permissions. Data Grid includes the PrincipalRoleMapper API for associating security principals to roles, and the RolePermissionMapper API for associating roles with specific permissions. Data Grid provides the following role and permission mapper implementations: Cluster role mapper Stores principal to role mappings in the cluster registry. Cluster permission mapper Stores role to permission mappings in the cluster registry. Allows you to dynamically modify user roles and permissions. Identity role mapper Uses the principal name as the role name. The type or format of the principal name depends on the source. For example, in an LDAP directory the principal name could be a Distinguished Name (DN). Common name role mapper Uses the Common Name (CN) as the role name. You can use this role mapper with an LDAP directory or with client certificates that contain Distinguished Names (DN); for example cn=managers,ou=people,dc=example,dc=com maps to the managers role. Note By default, principal-to-role mapping is only applied to principals which represent groups. It is possible to configure Data Grid to also perform the mapping for user principals by setting the authorization.group-only-mapping configuration attribute to false . 3.1.2.1. Mapping users to roles and permissions in Data Grid Consider the following user retrieved from an LDAP server, as a collection of DNs: Using the Common name role mapper , the user would be mapped to the following roles: Data Grid has the following role definitions: The user would have the following permissions: Additional resources Data Grid Security API org.infinispan.security.PrincipalRoleMapper org.infinispan.security.RolePermissionMapper org.infinispan.security.mappers.IdentityRoleMapper org.infinispan.security.mappers.CommonNameRoleMapper 3.1.3. Configuring role mappers Data Grid enables the cluster role mapper and cluster permission mapper by default. To use a different implementation for role mapping, you must configure the role mappers. Procedure Open your Data Grid configuration for editing. Declare the role mapper as part of the security authorization in the Cache Manager configuration. Save the changes to your configuration. With embedded caches you can programmatically configure role and permission mappers with the principalRoleMapper() and rolePermissionMapper() methods. Role mapper configuration XML <cache-container> <security> <authorization> <common-name-role-mapper /> </authorization> </security> </cache-container> JSON { "infinispan" : { "cache-container" : { "security" : { "authorization" : { "common-name-role-mapper": {} } } } } } YAML infinispan: cacheContainer: security: authorization: commonNameRoleMapper: ~ Additional resources Data Grid configuration schema reference 3.2. Enabling and configuring authorization for embedded caches When using embedded caches, you can configure authorization with the GlobalSecurityConfigurationBuilder and ConfigurationBuilder classes. Procedure Construct a GlobalConfigurationBuilder and enable security authorization with the security().authorization().enable() method. Specify a role mapper with the principalRoleMapper() method. If required, define custom role and permission mappings with the role() and permission() methods. GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); global.security().authorization().enable() .principalRoleMapper(new ClusterRoleMapper()) .role("myroleone").permission(AuthorizationPermission.ALL_WRITE) .role("myroletwo").permission(AuthorizationPermission.ALL_READ); Enable authorization for caches in the ConfigurationBuilder . Add all roles from the global configuration. ConfigurationBuilder config = new ConfigurationBuilder(); config.security().authorization().enable(); Explicitly define roles for a cache so that Data Grid denies access for users who do not have the role. Additional resources org.infinispan.configuration.global.GlobalSecurityConfigurationBuilder org.infinispan.configuration.cache.ConfigurationBuilder 3.3. Adding authorization roles at runtime Dynamically map roles to permissions when using security authorization with Data Grid caches. Prerequisites Configure authorization for embedded caches. Have ADMIN permissions for Data Grid. Procedure Obtain the RolePermissionMapper instance. Define new roles with the addRole() method. MutableRolePermissionMapper mapper = (MutableRolePermissionMapper) cacheManager.getCacheManagerConfiguration().security().authorization().rolePermissionMapper(); mapper.addRole(Role.newRole("myroleone", true, AuthorizationPermission.ALL_WRITE, AuthorizationPermission.LISTEN)); mapper.addRole(Role.newRole("myroletwo", true, AuthorizationPermission.READ, AuthorizationPermission.WRITE)); Additional resources org.infinispan.security.RolePermissionMapper 3.4. Executing code with secure caches When you construct a DefaultCacheManager for an embedded cache that uses security authorization, the Cache Manager returns a SecureCache that checks the security context before invoking any operations. A SecureCache also ensures that applications cannot retrieve lower-level insecure objects such as DataContainer . For this reason, you must execute code with a Data Grid user that has a role with the appropriate level of permission. Prerequisites Configure authorization for embedded caches. Procedure If necessary, retrieve the current Subject from the Data Grid context: Security.getSubject(); Wrap method calls in a PrivilegedAction to execute them with the Subject. Security.doAs(mySubject, (PrivilegedAction<String>)() -> cache.put("key", "value")); Additional resources org.infinispan.security.Security org.infinispan.security.SecureCache 3.5. Configuring the access control list (ACL) cache When you grant or deny roles to users, Data Grid stores details about which users can access your caches internally. This ACL cache improves performance for security authorization by avoiding the need for Data Grid to calculate if users have the appropriate permissions to perform read and write operations for every request. Note Whenever you grant or deny roles to users, Data Grid flushes the ACL cache to ensure it applies user permissions correctly. This means that Data Grid must recalculate cache permissions for all users each time you grant or deny roles. For best performance you should not frequently or repeatedly grant and deny roles in production environments. Procedure Open your Data Grid configuration for editing. Specify the maximum number of entries for the ACL cache with the cache-size attribute. Entries in the ACL cache have a cardinality of caches * users . You should set the maximum number of entries to a value that can hold information for all your caches and users. For example, the default size of 1000 is appropriate for deployments with up to 100 caches and 10 users. Set the timeout value, in milliseconds, with the cache-timeout attribute. If Data Grid does not access an entry in the ACL cache within the timeout period that entry is evicted. When the user subsequently attempts cache operations then Data Grid recalculates their cache permissions and adds an entry to the ACL cache. Important Specifying a value of 0 for either the cache-size or cache-timeout attribute disables the ACL cache. You should disable the ACL cache only if you disable authorization. Save the changes to your configuration. ACL cache configuration XML <infinispan> <cache-container name="acl-cache-configuration"> <security cache-size="1000" cache-timeout="300000"> <authorization/> </security> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "name" : "acl-cache-configuration", "security" : { "cache-size" : "1000", "cache-timeout" : "300000", "authorization" : {} } } } } YAML infinispan: cacheContainer: name: "acl-cache-configuration" security: cache-size: "1000" cache-timeout: "300000" authorization: ~ 3.5.1. Flushing ACL caches It is possible to flush the ACL cache using the GlobalSecurityManager MBean, accessible over JMX. Additional resources Data Grid configuration schema reference
[ "CN=myapplication,OU=applications,DC=mycompany CN=dataprocessors,OU=groups,DC=mycompany CN=finance,OU=groups,DC=mycompany", "dataprocessors finance", "dataprocessors: ALL_WRITE ALL_READ finance: LISTEN", "ALL_WRITE ALL_READ LISTEN", "<cache-container> <security> <authorization> <common-name-role-mapper /> </authorization> </security> </cache-container>", "{ \"infinispan\" : { \"cache-container\" : { \"security\" : { \"authorization\" : { \"common-name-role-mapper\": {} } } } } }", "infinispan: cacheContainer: security: authorization: commonNameRoleMapper: ~", "GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); global.security().authorization().enable() .principalRoleMapper(new ClusterRoleMapper()) .role(\"myroleone\").permission(AuthorizationPermission.ALL_WRITE) .role(\"myroletwo\").permission(AuthorizationPermission.ALL_READ);", "ConfigurationBuilder config = new ConfigurationBuilder(); config.security().authorization().enable();", "ConfigurationBuilder config = new ConfigurationBuilder(); config.security().authorization().enable().role(\"myroleone\");", "MutableRolePermissionMapper mapper = (MutableRolePermissionMapper) cacheManager.getCacheManagerConfiguration().security().authorization().rolePermissionMapper(); mapper.addRole(Role.newRole(\"myroleone\", true, AuthorizationPermission.ALL_WRITE, AuthorizationPermission.LISTEN)); mapper.addRole(Role.newRole(\"myroletwo\", true, AuthorizationPermission.READ, AuthorizationPermission.WRITE));", "Security.getSubject();", "Security.doAs(mySubject, (PrivilegedAction<String>)() -> cache.put(\"key\", \"value\"));", "<infinispan> <cache-container name=\"acl-cache-configuration\"> <security cache-size=\"1000\" cache-timeout=\"300000\"> <authorization/> </security> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"name\" : \"acl-cache-configuration\", \"security\" : { \"cache-size\" : \"1000\", \"cache-timeout\" : \"300000\", \"authorization\" : {} } } } }", "infinispan: cacheContainer: name: \"acl-cache-configuration\" security: cache-size: \"1000\" cache-timeout: \"300000\" authorization: ~" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/rbac-embedded
Chapter 50. Message Timestamp Router Action
Chapter 50. Message Timestamp Router Action Update the topic field as a function of the original topic name and the record's timestamp field. 50.1. Configuration Options The following table summarizes the configuration options available for the message-timestamp-router-action Kamelet: Property Name Description Type Default Example timestampKeys * Timestamp Keys Comma separated list of Timestamp keys. The timestamp is taken from the first found field. string timestampFormat Timestamp Format Format string for the timestamp that is compatible with java.text.SimpleDateFormat. string "yyyyMMdd" timestampKeyFormat Timestamp Keys Format Format of the timestamp keys. Possible values are 'timestamp' or any format string for the timestamp that is compatible with java.text.SimpleDateFormat. In case of 'timestamp' the field will be evaluated as milliseconds since 1970, so as a UNIX Timestamp. string "timestamp" topicFormat Topic Format Format string which can contain 'USD[topic]' and 'USD[timestamp]' as placeholders for the topic and timestamp, respectively. string "topic-USD[timestamp]" Note Fields marked with an asterisk (*) are mandatory. 50.2. Dependencies At runtime, the message-timestamp-router-action Kamelet relies upon the presence of the following dependencies: mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.0.0.fuse-800048-redhat-00001 camel:jackson camel:kamelet camel:core 50.3. Usage This section describes how you can use the message-timestamp-router-action . 50.3.1. Knative Action You can use the message-timestamp-router-action Kamelet as an intermediate step in a Knative binding. message-timestamp-router-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: message-timestamp-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: message-timestamp-router-action properties: timestampKeys: "The Timestamp Keys" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 50.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 50.3.1.2. Procedure for using the cluster CLI Save the message-timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f message-timestamp-router-action-binding.yaml 50.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 50.3.2. Kafka Action You can use the message-timestamp-router-action Kamelet as an intermediate step in a Kafka binding. message-timestamp-router-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: message-timestamp-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: message-timestamp-router-action properties: timestampKeys: "The Timestamp Keys" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 50.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 50.3.2.2. Procedure for using the cluster CLI Save the message-timestamp-router-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f message-timestamp-router-action-binding.yaml 50.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step message-timestamp-router-action -p "step-0.timestampKeys=The Timestamp Keys" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 50.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/message-timestamp-router-action.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: message-timestamp-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: message-timestamp-router-action properties: timestampKeys: \"The Timestamp Keys\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f message-timestamp-router-action-binding.yaml", "kamel bind timer-source?message=Hello --step message-timestamp-router-action -p \"step-0.timestampKeys=The Timestamp Keys\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: message-timestamp-router-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: message-timestamp-router-action properties: timestampKeys: \"The Timestamp Keys\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f message-timestamp-router-action-binding.yaml", "kamel bind timer-source?message=Hello --step message-timestamp-router-action -p \"step-0.timestampKeys=The Timestamp Keys\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/message-timestamp-router-action
Chapter 8. Managing record sets
Chapter 8. Managing record sets Red Hat OpenStack (RHOSP) DNS service (designate) stores data about zones in record sets. Record sets consist of one or more DNS resource records. You can query a zone to list its record sets in addition to adding, modifying, and deleting them. The topics included in this section are: Section 8.1, "About records and record sets in the DNS service" Section 8.2, "Creating a record set" Section 8.3, "Updating a record set" Section 8.4, "Deleting a record set" 8.1. About records and record sets in the DNS service The Domain Name System (DNS) uses resource records to store zone data within namespaces. DNS records in the Red Hat OpenStack (RHOSP) DNS service (designate) are managed using record sets. Each DNS record contains the following attributes: Name - the string that indicates its location in the DNS namespace. Type - the set of letter codes that identifies how the record is used. For example, A identifies address records and CNAME identifies canonical name records. Class - the set of letter codes that specify the namespace for the record. Typically, this is IN for internet, though other namespaces do exist. TTL - (time to live) the duration, in seconds, that the record remains valid. Rdata - the data for the record, such as an IP address for an A record or another record name for a CNAME record. Each zone namespace must contain a start of authority (SOA) record and can have an authoritative name server (NS) record and a variety of other types of records. The SOA record indicates that this name server is the best source of information about the zone. The NS record identifies the name server that is authoritative for a zone. The SOA and NS records for a zone are readable, but cannot be modified. Besides the required SOA and NS records, three of the most common record types are address (A), canonical name (CNAME), and pointer (PTR) records. A records map hostnames to IP addresses. PTR records map IP addresses to hostnames. CNAME records identify the full hostname for aliases. A record set represents one or more DNS records with the same name and type, but potentially different data. For example, a record set named web.example.com , with a type of A , that contains the data 192.0.2.1 and 192.0.2.2 might reflect two web servers hosting web.example.com located at those two IP addresses. You must create record sets within a zone. If you delete a zone that contains record sets, those record sets within the zone are also deleted. Consider this output obtained by querying the example.com zone with the openstack recordset list -c name -c type -c records example.com command: In this example, the authoritative name server for the example.com. zone is ns1.example.net. , the NS record. To verify this, you can use the BIND dig tool to query the name server for the NS record: You can also verify the A record sets: 8.2. Creating a record set By default, any user can create Red Hat OpenStack Platform DNS service (designate) record sets. Prerequisites Your project must own a zone in which you are creating a record set. Procedure Source your credentials file. Example You create record sets by using the openstack recordset create command. Record sets require a zone, name, type, and data. Example Note The trailing dot ( . ) is required when using fully qualified domain names (FQDN). If you omit the trailing dot, the zone name is duplicated in the resulting record name, for example www.example.com.example.com. . In the earlier example, a user has created a zone named example.com. . Because the record set name www is not an FQDN, the DNS service prepends it to the zone name. You can achieve the same result by using the FQDN for the record set name argument: If you want to construct a TXT record set that exceeds the maximum length for a character string (255 characters), then you must split the string into multiple, smaller strings when you create the record set. In this example, a user creates a TXT record set ( _domainkey.example.com ) that contains one string of 410 characters by specifying two strings- each less than the 255 character maximum: You can supply the --record argument multiple times to create multiple records within a record set. A typical use for multiple --record arguments is round-robin DNS. Example Verification Run the list command to verify that the record set you created exists: Example Sample output Additional resources recordset create command in the Command line interface reference recordset list command in the Command line interface reference man page for dig 8.3. Updating a record set By default, any user can update Red Hat OpenStack Platform DNS service (designate) record sets. Prerequisites Your project must own a zone in which you are updating a record set. Procedure Source your credentials file. Example You modify record sets by using the openstack recordset set command. Example In this example, a user is updating the record set web.example.com. to contain two records: Note When updating a record set you can identify it by its ID or its name. If you use its name, you must use the fully qualified domain name (FQDN). Verification Run the list command to confirm your modifications. Example Sample output Additional resources recordset create command in the Command line interface reference recordset list command in the Command line interface reference 8.4. Deleting a record set By default, any user can delete Red Hat OpenStack Platform DNS service (designate) record sets. Prerequisites Your project must own a zone in which you are deleting a record set. Procedure Source your credentials file. Example You delete record sets by using the openstack recordset delete command. Example In this example, a user is deleting the record set web.example.com. from the example.com. zone: Verification Run the list command to confirm your deletions. Example Sample output Additional resources recordset delete command in the Command line interface reference recordset list command in the Command line interface reference
[ "+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com. 16200126 | | | | 16 3599 600 8640 0 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | web.example.com. | A | 192.0.2.1 | | | | 192.0.2.2 | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+", "dig @ns1.example.net example.com. -t NS +short ns1.example.net.", "dig @ns1.example.net web.example.com. +short 192.0.2.2 192.0.2.1 dig @ns1.example.net www.example.com. +short 192.0.2.1", "source ~/overcloudrc", "openstack recordset create --type A --record 192.0.2.1 example.com. www", "openstack recordset create --type A --record 192.0.2.1 example.com. www.example.com.", "openstack recordset create --type TXT --record '\"210 characters string\" \"200 characters string\"' example.com. _domainkey", "openstack recordset create --type A --record 192.0.2.1 --record 192.0.2.2 example.com. web", "openstack recordset list -c name -c type -c records example.com.", "+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com 162001261 | | | | 6 3599 600 86400 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | web.example.com. | A | 192.0.2.1 192.0.2.2 | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+", "source ~/overcloudrc", "openstack recordset set example.com. web.example.com. --record 192.0.2.5 --record 192.0.2.6", "openstack recordset list -c name -c type -c records example.com.", "+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com 162001261 | | | | 6 3599 600 86400 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | web.example.com. | A | 192.0.2.5 192.0.2.6 | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+", "source ~/overcloudrc", "openstack recordset delete example.com. web.example.com.", "openstack recordset list -c name -c type -c records example.com.", "+------------------+------+----------------------------------------------+ | name | type | records | +------------------+------+----------------------------------------------+ | example.com. | SOA | ns1.example.net. admin.example.com 162001261 | | | | 6 3599 600 86400 3600 | | | | | | example.com. | NS | ns1.example.net. | | | | | | www.example.com. | A | 192.0.2.1 | +------------------+------+----------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dns_as_a_service/manage-record-sets_rhosp-dnsaas
Chapter 5. DM Multipath Administration and Troubleshooting
Chapter 5. DM Multipath Administration and Troubleshooting This chapter provides information on administering DM Multipath on a running system. 5.1. Automatic Configuration File Generation with Multipath Helper You can generate a basic configuration for multipath devices on Red Hat Enterprise Linux with the Multipath Helper application. The application gives you options to create multipath configurations with custom aliases, device blacklists, and settings for the characteristics of individual multipath devices. Upon completion, the application generates an installation script that includes the configuration parameters you selected and it provides a multipath.conf configuration file for review. The Multipath Helper application can be found at https://access.redhat.com/labsinfo/multipathhelper .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/MPIO_admin-troubleshoot
Chapter 8. Dynamic provisioning
Chapter 8. Dynamic provisioning 8.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 8.2. Available dynamic provisioning plugins OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. Amazon Elastic Block Store (Amazon EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. IBM Power(R) Virtual Server Block powervs.csi.ibm.com After installation, the IBM Power(R) Virtual Server Block CSI Driver Operator and IBM Power(R) Virtual Server Block CSI Driver automatically create the required storage classes for dynamic provisioning. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 8.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 8.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plug-in to plug-in. 8.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 8.3.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Volume type created in Cinder. Default is empty. 3 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 4 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.4. RHOSP Manila Container Storage Interface (CSI) object definition Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. 8.3.5. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp3 , sc1 , st1 . The default is gp3 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.6. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 3 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 4 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 8.3.7. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure. Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 8.3.7.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 8.3.8. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select either pd-standard or pd-ssd . The default is pd-standard . 8.3.9. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 For more information about using VMware vSphere CSI with OpenShift Container Platform, see the Kubernetes documentation . 8.4. Changing the default storage class Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs
[ "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage/dynamic-provisioning
Chapter 1. About the Multicloud Object Gateway
Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/about-the-multicloud-object-gateway
Chapter 1. Load Balancer Add-On Overview
Chapter 1. Load Balancer Add-On Overview Note As of Red Hat Enterprise Linux 6.6, Red Hat provides support for HAProxy and keepalived in addition to the Piranha load balancing software. For information on configuring a Red Hat Enterprise Linux system with HAProxy and keepalived, see the Load Balancer Administration documentation for Red Hat Enterprise Linux 7. The Load Balancer Add-On is a set of integrated software components that provide Linux Virtual Servers (LVS) for balancing IP load across a set of real servers. The Load Balancer Add-On runs on an active LVS router as well as a backup LVS router . The active LVS router serves two roles: To balance the load across the real servers. To check the integrity of the services on each real server. The backup LVS router monitors the active LVS router and takes over from it in case the active LVS router fails. This chapter provides an overview of The Load Balancer Add-On components and functions, and consists of the following sections: Section 1.1, "A Basic Load Balancer Add-On Configuration" Section 1.2, "A Three-Tier Load Balancer Add-On Configuration" Section 1.3, "Load Balancer Add-On Scheduling Overview" Section 1.4, "Routing Methods" Section 1.5, "Persistence and Firewall Marks" Section 1.6, "Load Balancer Add-On - A Block Diagram" 1.1. A Basic Load Balancer Add-On Configuration Figure 1.1, "A Basic Load Balancer Add-On Configuration" shows a simple Load Balancer Add-On configuration consisting of two layers. On the first layer is one active and one backup LVS router. Each LVS router has two network interfaces, one interface on the Internet and one on the private network, enabling them to regulate traffic between the two networks. For this example the active router is using Network Address Translation or NAT to direct traffic from the Internet to a variable number of real servers on the second layer, which in turn provide the necessary services. Therefore, the real servers in this example are connected to a dedicated private network segment and pass all public traffic back and forth through the active LVS router. To the outside world, the servers appears as one entity. Figure 1.1. A Basic Load Balancer Add-On Configuration Service requests arriving at the LVS router are addressed to a virtual IP address, or VIP . This is a publicly-routable address the administrator of the site associates with a fully-qualified domain name, such as www.example.com, and is assigned to one or more virtual servers . A virtual server is a service configured to listen on a specific virtual IP. Refer to Section 4.6, "VIRTUAL SERVERS" for more information on configuring a virtual server using the Piranha Configuration Tool . A VIP address migrates from one LVS router to the other during a failover, thus maintaining a presence at that IP address (also known as floating IP addresses ). VIP addresses may be aliased to the same device which connects the LVS router to the Internet. For instance, if eth0 is connected to the Internet, than multiple virtual servers can be aliased to eth0:1 . Alternatively, each virtual server can be associated with a separate device per service. For example, HTTP traffic can be handled on eth0:1 , and FTP traffic can be handled on eth0:2 . Only one LVS router is active at a time. The role of the active router is to redirect service requests from virtual IP addresses to the real servers. The redirection is based on one of eight supported load-balancing algorithms described further in Section 1.3, "Load Balancer Add-On Scheduling Overview" . The active router also dynamically monitors the overall health of the specific services on the real servers through simple send/expect scripts . To aid in detecting the health of services that require dynamic data, such as HTTPS or SSL, the administrator can also call external executables. If a service on a real server malfunctions, the active router stops sending jobs to that server until it returns to normal operation. The backup router performs the role of a standby system. Periodically, the LVS router exchanges heartbeat messages through the primary external public interface and, in a failover situation, the private interface. Should the backup node fail to receive a heartbeat message within an expected interval, it initiates a failover and assumes the role of the active router. During failover, the backup router takes over the VIP addresses serviced by the failed router using a technique known as ARP spoofing - where the backup LVS router announces itself as the destination for IP packets addressed to the failed node. When the failed node returns to active service, the backup node assumes its hot-backup role again. The simple, two-layered configuration used in Figure 1.1, "A Basic Load Balancer Add-On Configuration" is best for serving data which does not change very frequently - such as static webpages - because the individual real servers do not automatically sync data between each node. 1.1.1. Data Replication and Data Sharing Between Real Servers Since there is no built-in component in Load Balancer Add-On to share the same data between the real servers, the administrator has two basic options: Synchronize the data across the real server pool Add a third layer to the topology for shared data access The first option is preferred for servers that do not allow large numbers of users to upload or change data on the real servers. If the configuration allows large numbers of users to modify data, such as an e-commerce website, adding a third layer is preferable. 1.1.1.1. Configuring Real Servers to Synchronize Data There are many ways an administrator can choose to synchronize data across the pool of real servers. For instance, shell scripts can be employed so that if a Web engineer updates a page, the page is posted to all of the servers simultaneously. Also, the system administrator can use programs such as rsync to replicate changed data across all nodes at a set interval. However, this type of data synchronization does not optimally function if the configuration is overloaded with users constantly uploading files or issuing database transactions. For a configuration with a high load, a three-tier topology is the ideal solution.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/ch-lvs-overview-vsa
32.4. Using fadump on IBM PowerPC hardware
32.4. Using fadump on IBM PowerPC hardware Starting with Red Hat Enterprise Linux 6.8 an alternative dumping mechanism to kdump , the firmware-assisted dump ( fadump ), is available. The fadump feature is supported only on IBM Power Systems. The goal of fadump is to enable the dump of a crashed system, and to do so from a fully-reset system, and to minimize the total elapsed time until the system is back in production use. The fadump feature is integrated with kdump infrastructure present in the user space to seemlessly switch between kdump and fadump mechanisms. Firmware-assisted dump ( fadump ) is a reliable alternative to kexec-kdump available on IBM PowerPC LPARS. It captures vmcore from a fully-reset system with PCI and I/O devices reinitialized. While this mechanism uses the firmware to preserve the memory in case of a crash, it reuses the kdump userspace scripts to save the vmcore" To achieve this, fadump registers the regions of memory that must be preserved in the event of a crash with the system firmware. These regions consist of all the system memory contents, except the boot memory, system registers and hardware Page Table Entries (PTEs). Note The area of memory not preserved and known as boot memory is the amount of RAM required to successfully boot the kernel after a crash event. By default, the boot memory size is 256MB or 5% of total system RAM, whichever is larger. Unlike a kexec -initiated event, the fadump process uses the production kernel to recover a crash dump. When booting after a crash, PowerPC hardware makes the device node /proc/device-tree/rtas/ibm,kernel-dump available to procfs , which the fadump-aware kdump scripts check for to save the vmcore. After this has completed, the system is rebooted cleanly. Enabling fadump Install and configure kdump as described in Section 32.1, "Installing the kdump Service" and Section 32.2, "Configuring the kdump Service" . Add fadump=on to the GRUB_CMDLINE_LINUX line in /etc/default/grub : (optional) If you want to specify reserved boot memory instead of accepting the defaults, add fadump_reserve_mem= xx M to GRUB_CMDLINE_LINUX in /etc/default/grub , where xx is the amount of the memory required in megabytes: Important As with all boot configuration options, it is strongly recommended that you test the configuration before it is needed. If you observe Out of Memory (OOM) errors when booting from the crash kernel, increase the value specified in fadump_reserve_mem= until the crash kernel can boot cleanly. Some trial and error may be required in this case.
[ "GRUB_CMDLINE_LINUX=\"rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet fadump=on \"", "GRUB_CMDLINE_LINUX=\"rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root rhgb quiet fadump=on fadump_reserve_mem= xx M \"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-ppc-fadump
20.33. Dumping Storage Volume Information to an XML File
20.33. Dumping Storage Volume Information to an XML File The virsh vol-dumpxml vol command takes the volume information, creates an XML file with the contents and outputs it to the settings that are set on the stdout stream. Optionally, you can supply the name of the associated storage pool using the --pool option. Example 20.93. How to dump the contents of a storage volume The following example dumps the contents of the storage volume named vol-new into an XML file:
[ "virsh vol-dumpxml vol-new" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-storage_volume_commands-dumping_storage_volume_information_to_an_xml_file
Chapter 4. Installing the undercloud with containers
Chapter 4. Installing the undercloud with containers This chapter provides info on how to create a container-based undercloud and keep it updated. 4.1. Configuring director The director installation process requires certain settings in the undercloud.conf configuration file, which director reads from the home directory of the stack user. Complete the following steps to copy default template as a foundation for your configuration. Procedure Copy the default template to the home directory of the stack user's: Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value. 4.2. Director configuration parameters The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors. Important At minimum, you must set the container_images_file parameter to the environment file that contains your container image configuration. Without this parameter properly set to the appropriate file, director cannot obtain your container image rule set from the ContainerImagePrepare parameter nor your container registry authentication details from the ContainerImageRegistryCredentials parameter. Defaults The following parameters are defined in the [DEFAULT] section of the undercloud.conf file: additional_architectures A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports only the x86_64 architecture. certificate_generation_ca The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain. clean_nodes Defines whether to wipe the hard drive between deployments and after introspection. cleanup Cleanup temporary files. Set this to False to leave the temporary files used during deployment in place after you run the deployment command. This is useful for debugging the generated files or if errors occur. container_cli The CLI tool for container management. Leave this parameter set to podman . Red Hat Enterprise Linux 9.0 only supports podman . container_healthcheck_disabled Disables containerized service health checks. Red Hat recommends that you enable health checks and leave this option set to false . container_images_file Heat environment file with container image information. This file can contain the following entries: Parameters for all required container images The ContainerImagePrepare parameter to drive the required image preparation. Usually the file that contains this parameter is named containers-prepare-parameter.yaml . container_insecure_registries A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite Server if the undercloud is registered to Satellite. container_registry_mirror An optional registry-mirror configured that podman uses. custom_env_files Additional environment files that you want to add to the undercloud installation. deployment_user The user who installs the undercloud. Leave this parameter unset to use the current default user stack . discovery_default_driver Sets the default driver for automatically enrolled nodes. Requires the enable_node_discovery parameter to be enabled and you must include the driver in the enabled_hardware_types list. enable_ironic; enable_ironic_inspector; enable_tempest; enable_validations Defines the core services that you want to enable for director. Leave these parameters set to true . enable_node_discovery Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake driver as a default but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes. enable_routed_networks Defines whether to enable support for routed control plane networks. enabled_hardware_types A list of hardware types that you want to enable for the undercloud. generate_service_certificate Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem . The CA defined in the certificate_generation_ca parameter signs this certificate. heat_container_image URL for the heat container image to use. Leave unset. heat_native Run host-based undercloud configuration using heat-all . Leave as true . hieradata_override Path to hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. For more information about using this feature, see Configuring hieradata on the undercloud . inspection_extras Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect packages on the introspection image. inspection_interface The bridge that director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane . inspection_runbench Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. ipv6_address_mode IPv6 address configuration mode for the undercloud provisioning network. The following list contains the possible values for this parameter: dhcpv6-stateless - Address configuration using router advertisement (RA) and optional information using DHCPv6. dhcpv6-stateful - Address configuration and optional information using DHCPv6. ipxe_enabled Defines whether to use iPXE or standard PXE. The default is true , which enables iPXE. Set this parameter to false to use standard PXE. For PowerPC deployments, or for hybrid PowerPC and x86 deployments, set this value to false . local_interface The chosen interface for the director Provisioning NIC. This is also the device that director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command: In this example, the External NIC uses em0 and the Provisioning NIC uses em1 , which is currently not configured. In this case, set the local_interface to em1 . The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter. local_ip The IP address defined for the director Provisioning NIC. This is also the IP address that director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if this IP address conflicts with an existing IP address or subnet in your environment. For IPv6, the local IP address prefix length must be /64 to support both stateful and stateless connections. local_mtu The maximum transmission unit (MTU) that you want to use for the local_interface . Do not exceed 1500 for the undercloud. local_subnet The local subnet that you want to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet . net_config_override Path to network configuration override template. If you set this parameter, the undercloud uses a JSON or YAML format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf . Use this parameter when you want to configure bonding or add an option to the interface. For more information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . networks_file Networks file to override for heat . output_dir Directory to output state, processed heat templates, and Ansible deployment files. overcloud_domain_name The DNS domain name that you want to use when you deploy the overcloud. Note When you configure the overcloud, you must set the CloudDomain parameter to a matching value. Set this parameter in an environment file when you configure your overcloud. roles_file The roles file that you want to use to override the default roles file for undercloud installation. It is highly recommended to leave this parameter unset so that the director installation uses the default roles file. scheduler_max_attempts The maximum number of times that the scheduler attempts to deploy an instance. This value must be greater or equal to the number of bare metal nodes that you expect to deploy at once to avoid potential race conditions when scheduling. service_principal The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA. subnets List of routed network subnets for provisioning and introspection. The default value includes only the ctlplane-subnet subnet. For more information, see Subnets . templates Heat templates file to override. undercloud_admin_host The IP address or hostname defined for director admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_admin_host is not in the same IP network as the local_ip , you must configure the interface on which you want the admin APIs on the undercloud to listen. By default, the admin APIs listen on the br-ctlplane interface. For information about how to configure undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_debug Sets the log level of undercloud services to DEBUG . Set this value to true to enable DEBUG log level. undercloud_enable_selinux Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue. undercloud_hostname Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but you must configure all system host name settings appropriately. undercloud_log_file The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory. For example, /home/stack/install-undercloud.log . undercloud_nameservers A list of DNS nameservers to use for the undercloud hostname resolution. undercloud_ntp_servers A list of network time protocol servers to help synchronize the undercloud date and time. undercloud_public_host The IP address or hostname defined for director public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask. If the undercloud_public_host is not in the same IP network as the local_ip , you must set the PublicVirtualInterface parameter to the public-facing interface on which you want the public APIs on the undercloud to listen. By default, the public APIs listen on the br-ctlplane interface. Set the PublicVirtualInterface parameter in a custom environment file, and include the custom environment file in the undercloud.conf file by configuring the custom_env_files parameter. For information about customizing undercloud network interfaces, see Configuring undercloud network interfaces . undercloud_service_certificate The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate. undercloud_timezone Host timezone for the undercloud. If you do not specify a timezone, director uses the existing timezone configuration. undercloud_update_packages Defines whether to update packages during the undercloud installation. Subnets Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet , use the following sample in your undercloud.conf file: You can specify as many provisioning networks as necessary to suit your environment. Important Director cannot change the IP addresses for a subnet after director creates the subnet. cidr The network that director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network. masquerade Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through director. Note The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter. dhcp_start; dhcp_end The start and end of the DHCP allocation range for overcloud nodes. Ensure that this range contains enough IP addresses to allocate to your nodes. If not specified for the subnet, director determines the allocation pools by removing the values set for the local_ip , gateway , undercloud_admin_host , undercloud_public_host , and inspection_iprange parameters from the subnets full IP range. You can configure non-contiguous allocation pools for undercloud control plane subnets by specifying a list of start and end address pairs. Alternatively, you can use the dhcp_exclude option to exclude IP addresses within an IP address range. For example, the following configurations both create allocation pools 172.20.0.100-172.20.0.150 and 172.20.0.200-172.20.0.250 : Option 1 Option 2 dhcp_exclude IP addresses to exclude in the DHCP allocation range. For example, the following configuration excludes the IP address 172.20.0.105 and the IP address range 172.20.0.210-172.20.0.219 : dns_nameservers DNS nameservers specific to the subnet. If no nameservers are defined for the subnet, the subnet uses nameservers defined in the undercloud_nameservers parameter. gateway The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for director or want to use an external gateway directly. host_routes Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud. inspection_iprange Temporary IP range for nodes on this network to use during the inspection process. This range must not overlap with the range defined by dhcp_start and dhcp_end but must be in the same IP subnet. Modify the values for these parameters to suit your configuration. When complete, save the file. 4.3. Installing director Complete the following steps to install director and perform some basic post-installation tasks. Procedure Run the following command to install director on the undercloud: This command launches the director configuration script. Director installs additional packages and configures its services according to the configuration in the undercloud.conf . This script takes several minutes to complete. The script generates two files: /home/stack/tripleo-deploy/undercloud/tripleo-undercloud-passwords.yaml - A list of all passwords for the director services. /home/stack/stackrc - A set of initialization variables to help you access the director command line tools. The script also starts all OpenStack Platform service containers automatically. You can check the enabled containers with the following command: To initialize the stack user to use the command line tools, run the following command: The prompt now indicates that OpenStack commands authenticate and execute against the undercloud; The director installation is complete. You can now use the director command line tools. 4.4. Performing a minor update of a containerized undercloud Director provides commands to update the main packages on the undercloud node. Use director to perform a minor update within the current version of your RHOSP environment. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Update the director main packages with the dnf update command: USD sudo dnf update -y python3-tripleoclient ansible-* Update the undercloud environment: Wait until the undercloud update process completes. Reboot the undercloud to update the operating system's kernel and other system packages: Wait until the node boots.
[ "[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf", "2: em0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic em0 valid_lft 3462sec preferred_lft 3462sec inet6 fe80::5054:ff:fe75:2409/64 scope link valid_lft forever preferred_lft forever 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff", "[ctlplane-subnet] cidr = 192.168.24.0/24 dhcp_start = 192.168.24.5 dhcp_end = 192.168.24.24 inspection_iprange = 192.168.24.100,192.168.24.120 gateway = 192.168.24.1 masquerade = true", "dhcp_start = 172.20.0.100,172.20.0.200 dhcp_end = 172.20.0.150,172.20.0.250", "dhcp_start = 172.20.0.100 dhcp_end = 172.20.0.250 dhcp_exclude = 172.20.0.151-172.20.0.199", "dhcp_exclude = 172.20.0.105,172.20.0.210-172.20.0.219", "[stack@director ~]USD openstack undercloud install", "[stack@director ~]USD sudo podman ps", "[stack@director ~]USD source ~/stackrc", "(undercloud) [stack@director ~]USD", "source ~/stackrc", "sudo dnf update -y python3-tripleoclient ansible-*", "openstack undercloud upgrade", "sudo reboot" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/transitioning_to_containerized_services/assembly_installing-the-undercloud-with-containers
Chapter 1. Red Hat Storage on Public Cloud
Chapter 1. Red Hat Storage on Public Cloud Red Hat Gluster Storage for Public Cloud packages glusterFS for deploying scalable NAS in the public cloud. This powerful storage server provides all the features of On-Premise deployment, within a highly available, scalable, virtualized, and centrally managed pool of NAS storage hosted off-premise. Additionally, Red Hat Gluster Storage can be deployed in the public cloud using Red Hat Gluster Storage for Public Cloud, for example, within the Amazon Web Services (AWS) cloud. It delivers all the features and functionality possible in a private cloud or datacenter to the public cloud by providing massively scalable and high available NAS in the cloud. The POSIX compatible glusterFS servers, which use XFS file system format to store data on disks, can be accessed using industry-standard access protocols including Network File System (NFS) and Server Message Block (SMB) (also known as CIFS). 1.1. About glusterFS glusterFS aggregates various storage servers over network interconnects into one large parallel network file system. Based on a stackable user space design, it delivers exceptional performance for diverse workloads and is a key building block of Red Hat Gluster Storage.
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/chap-documentation-deployment_guide_for_public_cloud-platform_introduction
Chapter 5. Clair security scanner
Chapter 5. Clair security scanner 5.1. Clair configuration overview Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example: USD clair -conf ./path/to/config.yaml -mode indexer or USD clair -conf ./path/to/config.yaml -mode matcher The aforementioned commands each start two Clair nodes using the same configuration file. One runs the indexing facilities, while other runs the matching facilities. If you are running Clair in combo mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration. 5.1.1. Information about using Clair in a proxy environment Environment variables respected by the Go standard library can be specified if needed, for example: HTTP_PROXY USD export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port> HTTPS_PROXY . USD export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port> SSL_CERT_DIR USD export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates> NO_PROXY USD export NO_PROXY=<comma_separated_list_of_hosts_and_domains> If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv updater requires access to https://osv-vulnerabilities.storage.googleapis.com to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs". You must also ensure that the standard Clair URLs are added to the proxy allowlist: https://search.maven.org/solrsearch/select https://catalog.redhat.com/api/containers/ https://access.redhat.com/security/data/metrics/repository-to-cpe.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy. 5.1.2. Clair configuration reference The following YAML shows an example Clair configuration: http_listen_addr: "" introspection_addr: "" log_level: "" tls: {} indexer: connstring: "" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: "" indexer_addr: "" migrations: false period: "" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: "" migrations: false indexer_addr: "" matcher_addr: "" poll_interval: "" delivery_interval: "" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: "" probability: null jaeger: agent: endpoint: "" collector: endpoint: "" username: null password: null service_name: "" tags: nil buffer_max: 0 metrics: name: "" prometheus: endpoint: null dogstatsd: url: "" Note The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally. 5.1.3. Clair general fields The following table describes the general configuration fields available for a Clair deployment. Field Typhttp_listen_ae Description http_listen_addr String Configures where the HTTP API is exposed. Default: :6060 introspection_addr String Configures where Clair's metrics and health endpoints are exposed. log_level String Sets the logging level. Requires one of the following strings: debug-color , debug , info , warn , error , fatal , panic tls String A map containing the configuration for serving the HTTP API of TLS/SSL and HTTP/2. .cert String The TLS certificate to be used. Must be a full-chain certificate. Example configuration for general Clair fields The following example shows a Clair configuration. Example configuration for general Clair fields # ... http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info # ... 5.1.4. Clair indexer configuration fields The following table describes the configuration fields for Clair's indexer component. Field Type Description indexer Object Provides Clair indexer node configuration. .airgap Boolean Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .index_report_request_concurrency Integer Rate limits the number of index report creation requests. Setting this to 0 attemps to auto-size this value. Setting a negative value means unlimited. The auto-sizing is a multiple of the number of available cores. The API returns a 429 status code if concurrency is exceeded. .scanlock_retry Integer A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. .layer_scan_concurrency Integer Positive integer limiting the number of concurrent layer scans. Indexers will match a manifest's layer concurrently. This value tunes the number of layers an indexer scans in parallel. .migrations Boolean Whether indexer nodes handle migrations to their database. .scanner String Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration pass to it on construction if designed to do so. .scanner.dist String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.package String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.repo String A map with the name of a particular scanner and arbitrary YAML as a value. Example indexer configuration The following example shows a hypothetical indexer configuration for Clair. Example indexer configuration # ... indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true # ... 5.1.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. .indexer_addr String A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. Defaults to 30m . .migrations Boolean Whether matcher nodes handle migrations to their databases. .period String Determines how often updates for new security advisories take place. Defaults to 30m . .disable_updaters Boolean Whether to run background updates or not. Default: False .update_retention Integer Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to 10m . If a value of less than 0 is provided, garbage collection is disabled. 2 is the minimum value to ensure updates can be compared to notifications. Example matcher configuration Example matcher configuration # ... matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2 # ... 5.1.6. Clair matchers configuration fields The following table describes the configuration fields for Clair's matchers component. Note Differs from matcher configuration fields. Table 5.1. Matchers configuration fields Field Type Description matchers Array of strings Provides configuration for the in-tree matchers . .names String A list of string values informing the matcher factory about enabled matchers. If value is set to null , the default list of matchers run. The following strings are accepted: alpine-matcher , aws-matcher , debian-matcher , gobin , java-maven , oracle , photon , python , rhel , rhel-container-matcher , ruby , suse , ubuntu-matcher .config String Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matchers factory constructor. For example: Example matchers configuration The following example shows a hypothetical Clair deployment that only requires only the alpine , aws , debian , oracle matchers. Example matchers configuration # ... matchers: names: - "alpine-matcher" - "aws" - "debian" - "oracle" # ... 5.1.7. Clair updaters configuration fields The following table describes the configuration fields for Clair's updaters component. Table 5.2. Updaters configuration fields Field Type Description updaters Object Provides configuration for the matcher's update manager. .sets String A list of values informing the update manager which updaters to run. If value is set to null , the default set of updaters runs the following: alpine , aws , clair.cvss , debian , oracle , photon , osv , rhel , rhcc suse , ubuntu If left blank, zero updaters run. .config String Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set's constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". Example updaters configuration In the following configuration, only the rhel set is configured. The ignore_unpatched variable, which is specific to the rhel updater, is also defined. Example updaters configuration # ... updaters: sets: - rhel config: rhel: ignore_unpatched: false # ... 5.1.8. Clair notifier configuration fields The general notifier configuration fields for Clair are listed below. Field Type Description notifier Object Provides Clair notifier node configuration. .connstring String Postgres connection string. Accepts format as URL, or libpq connection string. .migrations Boolean Whether notifier nodes handle migrations to their database. .indexer_addr String A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. .matcher_addr String A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. .poll_interval String The frequency at which the notifier will query a matcher for update operations. .delivery_interval String The frequency at which the notifier attempts delivery of created, or previously failed, notifications. .disable_summary Boolean Controls whether notifications should be summarized to one per manifest. Example notifier configuration The following notifier snippet is for a minimal configuration. Example notifier configuration # ... notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" headers: "" amqp: null stomp: null # ... 5.1.8.1. Clair webhook configuration fields The following webhook fields are available for the Clair notifier environment. Table 5.3. Clair webhook fields .webhook Object Configures the notifier for webhook delivery. .webhook.target String URL where the webhook will be delivered. .webhook.callback String The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. .webhook.headers String A map associating a header name to a list of values. Example webhook configuration Example webhook configuration # ... notifier: # ... webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" # ... 5.1.8.2. Clair amqp configuration fields The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment. .amqp Object Configures the notifier for AMQP delivery. [NOTE] ==== Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should setup exchanges and queues ahead of time. ==== .amqp.direct Boolean If true , the notifier will deliver individual notifications (not a callback) to the configured AMQP broker. .amqp.rollup Integer When amqp.direct is set to true , this value informs the notifier of how many notifications to send in a direct delivery. For example, if direct is set to true , and amqp.rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .amqp.exchange Object The AMQP exchange to connect to. .amqp.exchange.name String The name of the exchange to connect to. .amqp.exchange.type String The type of the exchange. Typically one of the following: direct , fanout , topic , headers . .amqp.exchange.durability Boolean Whether the configured queue is durable. .amqp.exchange.auto_delete Boolean Whether the configured queue uses an auto_delete_policy . .amqp.routing_key String The name of the routing key each notification is sent with. .amqp.callback String If amqp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .amqp.uris String A list of one or more AMQP brokers to connect to, in priority order. .amqp.tls Object Configures TLS/SSL connection to an AMQP broker. .amqp.tls.root_ca String The filesystem path where a root CA can be read. .amqp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. [NOTE] ==== Clair also allows SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .amqp.tls.key String The filesystem path where a TLS/SSL private key can be read. Example AMQP configuration The following example shows a hypothetical AMQP configuration for Clair. Example AMQP configuration # ... notifier: # ... amqp: exchange: name: "" type: "direct" durable: true auto_delete: false uris: ["amqp://user:pass@host:10000/vhost"] direct: false routing_key: "notifications" callback: "http://clair-notifier/notifier/api/v1/notifications" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 5.1.8.3. Clair STOMP configuration fields The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment. .stomp Object Configures the notifier for STOMP delivery. .stomp.direct Boolean If true , the notifier delivers individual notifications (not a callback) to the configured STOMP broker. .stomp.rollup Integer If stomp.direct is set to true , this value limits the number of notifications sent in a single direct delivery. For example, if direct is set to true , and rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .stomp.callback String If stomp.callback is set to false , the provided URL in the notification callback is sent to the broker. This URL should point to Clair's notification API endpoint. .stomp.destination String The STOMP destination to deliver notifications to. .stomp.uris String A list of one or more STOMP brokers to connect to in priority order. .stomp.tls Object Configured TLS/SSL connection to STOMP broker. .stomp.tls.root_ca String The filesystem path where a root CA can be read. [NOTE] ==== Clair also respects SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .stomp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .stomp.tls.key String The filesystem path where a TLS/SSL private key can be read. .stomp.user String Configures login details for the STOMP broker. .stomp.user.login String The STOMP login to connect with. .stomp.user.passcode String The STOMP passcode to connect with. Example STOMP configuration The following example shows a hypothetical STOMP configuration for Clair. Example STOMP configuration # ... notifier: # ... stomp: desitnation: "notifications" direct: false callback: "http://clair-notifier/notifier/api/v1/notifications" login: login: "username" passcode: "passcode" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 5.1.9. Clair authorization configuration fields The following authorization configuration fields are available for Clair. Field Type Description auth Object Defines Clair's external and intra-service JWT based authentication. If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. .psk String Defines pre-shared key authentication. .psk.key String A shared base64 encoded key distributed between all parties signing and verifying JWTs. .psk.iss String A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. Example authorization configuration The following authorization snippet is for a minimal configuration. Example authorization configuration # ... auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: ["quay"] # ... 5.1.10. Clair trace configuration fields The following trace configuration fields are available for Clair. Field Type Description trace Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the application traces will belong to. .probability Integer The probability a trace will occur. .jaeger Object Defines values for Jaeger tracing. .jaeger.agent Object Defines values for configuring delivery to a Jaeger agent. .jaeger.agent.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector Object Defines values for configuring delivery to a Jaeger collector. .jaeger.collector.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector.username String A Jaeger username. .jaeger.collector.password String A Jaeger password. .jaeger.service_name String The service name registered in Jaeger. .jaeger.tags String Key-value pairs to provide additional metadata. .jaeger.buffer_max Integer The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. Example trace configuration The following example shows a hypothetical trace configuration for Clair. Example trace configuration # ... trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" # ... 5.1.11. Clair metrics configuration fields The following metrics configuration fields are available for Clair. Field Type Description metrics Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the metrics in use. .prometheus String Configuration for a Prometheus metrics exporter. .prometheus.endpoint String Defines the path where metrics are served. Example metrics configuration The following example shows a hypothetical metrics configuration for Clair. Example metrics configuration # ... metrics: name: "prometheus" prometheus: endpoint: "/metricsz" # ...
[ "clair -conf ./path/to/config.yaml -mode indexer", "clair -conf ./path/to/config.yaml -mode matcher", "export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>", "export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>", "export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>", "export NO_PROXY=<comma_separated_list_of_hosts_and_domains>", "http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"", "http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info", "indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true", "matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2", "matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"", "updaters: sets: - rhel config: rhel: ignore_unpatched: false", "notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null", "notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"", "notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]", "trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"", "metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/configure_red_hat_quay/clair-vulnerability-scanner
Chapter 88. Enabling AD users to administer IdM
Chapter 88. Enabling AD users to administer IdM 88.1. ID overrides for AD users In Red Hat Enterprise Linux (RHEL) 7, external group membership allows Active Directory (AD) users and groups to access Identity Management (IdM) resources in a POSIX environment with the help of the System Security Services Daemon (SSSD). The IdM LDAP server has its own mechanisms to grant access control. RHEL 8 introduces an update that allows adding an ID user override for an AD user as a member of an IdM group. An ID override is a record describing what a specific Active Directory user or group properties should look like within a specific ID view, in this case the Default Trust View . As a consequence of the update, the IdM LDAP server is able to apply access control rules for the IdM group to the AD user. AD users are now able to use the self service features of IdM UI, for example to upload their SSH keys, or change their personal data. An AD administrator is able to fully administer IdM without having two different accounts and passwords. Note Currently, selected features in IdM may still be unavailable to AD users. For example, setting passwords for IdM users as an AD user from the IdM admins group might fail. Important Do not use ID overrides of AD users for sudo rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. Additional resources Using ID views for Active Directory users 88.2. Using ID overrides to enable AD users to administer IdM Follow this procedure to create and use an ID override for an AD user to give that user rights identical to those of an IdM user. During this procedure, work on an IdM server that is configured as a trust controller or a trust agent. Prerequisites The idm:DL1 stream is enabled on your Identity Management (IdM) server and you have switched to the RPMs delivered through this stream: The idm:DL1/adtrust profile is installed on your IdM server. The profile contains all the packages necessary for installing an IdM server that will have a trust agreement with Active Directory (AD). A working IdM environment is set up. For details, see Installing Identity Management . A working trust between your IdM environment and AD is set up. Procedure As an IdM administrator, create an ID override for an AD user in the Default Trust View . For example, to create an ID override for the user [email protected] : Add the ID override from the Default Trust View as a member of an IdM group. This must be a non-POSIX group, as it interacts with Active Directory. If the group in question is a member of an IdM role, the AD user represented by the ID override gains all permissions granted by the role when using the IdM API, including both the command-line interface and the IdM web UI. For example, to add the ID override for the [email protected] user to the IdM admins group: Alternatively, you can add the ID override to a role, such as the User Administrator role: Additional resources Using ID views for Active Directory users 88.3. Using Ansible to enable AD users to administer IdM Follow this procedure to use an Ansible playbook to ensure that a user ID override is present in an Identity Management (IdM) group. The user ID override is the override of an Active Directory (AD) user that you created in the Default Trust View after you established a trust with AD. As a result of running the playbook, an AD user, for example an AD administrator, is able to fully administer IdM without having two different accounts and passwords. Prerequisites You know the IdM admin password. You have installed a trust with AD . The user ID override of the AD user already exists in IdM. If it does not, create it with the ipa idoverrideuser-add 'default trust view' [email protected] command. The group to which you are adding the user ID override already exists in IdM . You are using the 4.8.7 version of IdM or later. To view the version of IdM you have installed on your server, enter ipa --version . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create an add-useridoverride-to-group.yml playbook with the following content: In the example: Secret123 is the IdM admin password. admins is the name of the IdM POSIX group to which you are adding the [email protected] ID override. Members of this group have full administrator privileges. [email protected] is the user ID override of an AD administrator. The user is stored in the AD domain with which a trust has been established. Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources ID overrides for AD users /usr/share/doc/ansible-freeipa/README-group.md /usr/share/doc/ansible-freeipa/playbooks/user Using ID views in Active Directory environments 88.4. Verifying that an AD user can perform correct commands in the IdM CLI This procedure checks that an Active Directory (AD) user can log into Identity Management (IdM) command-line interface (CLI) and run commands appropriate for his role. Destroy the current Kerberos ticket of the IdM administrator: Note The destruction of the Kerberos ticket is required because the GSSAPI implementation in MIT Kerberos chooses credentials from the realm of the target service by preference, which in this case is the IdM realm. This means that if a credentials cache collection, namely the KCM: , KEYRING: , or DIR: type of credentials cache is in use, a previously obtained admin or any other IdM principal's credentials will be used to access the IdM API instead of the AD user's credentials. Obtain the Kerberos credentials of the AD user for whom an ID override has been created: Test that the ID override of the AD user enjoys the same privileges stemming from membership in the IdM group as any IdM user in that group. If the ID override of the AD user has been added to the admins group, the AD user can, for example, create groups in IdM: 88.5. Using Ansible to enable an AD user to administer IdM You can use the ansible-freeipa idoverrideuser and group modules to create a user ID override for an Active Directory (AD) user from a trusted AD domain and give that user rights identical to those of an IdM user. The procedure uses the example of the Default Trust View ID view to which the [email protected] ID override is added in the first playbook task. In the playbook task, the [email protected] ID override is added to the IdM admins group as a member. As a result, an AD administrator can administer IdM without having two different accounts and passwords. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 8.10 or later. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The AD forest is in trust with IdM. In the example, the name of the AD domain is addomain.com and the fully-qualified domain name (FQDN) of the AD administrator is [email protected] . The ipaserver host in the inventory file is configured as a trust controller or a trust agent. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure On your Ansible control node, create an enable-ad-admin-to-administer-idm.yml playbook with a task to add the [email protected] user override to the Default Trust View: Use another playbook task in the same playbook to add the AD administrator user ID override to the admins group: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in to the IdM client as the AD Administrator: Verify that you have obtained a valid ticket-granting ticket (TGT): Verify your admin privileges in IdM: Additional resources The idoverrideuser and ipagroup ansible-freeipa upstream documentation Enabling AD users to administer IdM
[ "yum module enable idm:DL1 yum distro-sync", "yum module install idm:DL1/adtrust", "kinit admin ipa idoverrideuser-add 'default trust view' [email protected]", "ipa group-add-member admins [email protected]", "ipa role-add-member 'User Administrator' [email protected]", "cd ~/ MyPlaybooks /", "--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]", "ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml", "kdestroy -A", "kinit [email protected] Password for [email protected]:", "ipa group-add some-new-group ---------------------------- Added group \"some-new-group\" ---------------------------- Group name: some-new-group GID: 1997000011", "--- - name: Enable AD administrator to act as a FreeIPA admin hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idoverride for [email protected] in 'default trust view' ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]", "- name: Add the AD administrator as a member of admins ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]", "ansible-playbook --vault-password-file=password_file -v -i inventory enable-ad-admin-to-administer-idm.yml", "ssh [email protected]@client.idm.example.com", "klist Ticket cache: KCM:325600500:99540 Default principal: [email protected] Valid starting Expires Service principal 02/04/2024 11:54:16 02/04/2024 21:54:16 krbtgt/[email protected] renew until 02/05/2024 11:54:16", "ipa user-add testuser --first=test --last=user ------------------------ Added user \"tuser\" ------------------------ User login: tuser First name: test Last name: user Full name: test user [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/enabling-ad-users-to-administer-idm_configuring-and-managing-idm
Chapter 61. Next steps
Chapter 61. steps Testing a decision service using test scenarios Packaging and deploying an Red Hat Decision Manager project
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/next_steps_5
Appendix A. Template Writing Reference
Appendix A. Template Writing Reference Embedded Ruby (ERB) is a tool for generating text files based on templates that combine plain text with Ruby code. Red Hat Satellite uses ERB syntax in the following cases: Provisioning templates For more information, see Creating Provisioning Templates in the Provisioning Guide . Remote execution job templates For more information, see Chapter 12, Configuring and Setting Up Remote Jobs . Report templates For more information, see Chapter 10, Using Report Templates to Monitor Hosts . Templates for partition tables For more information, see Creating Partition Tables in the Provisioning Guide . Smart Class Parameters For more information, see Configuring Puppet Smart Class Parameters in Managing Configurations Using Puppet Integration in Red Hat Satellite . This section provides an overview of Satellite-specific macros and variables that can be used in ERB templates along with some usage examples. Note that the default templates provided by Red Hat Satellite ( Hosts > Provisioning templates , Hosts > Job templates , Monitor > Report Templates ) also provide a good source of ERB syntax examples. When provisioning a host or running a remote job, the code in the ERB is executed and the variables are replaced with the host specific values. This process is referred to as rendering . Satellite Server has the safemode rendering option enabled by default, which prevents any harmful code being executed from templates. A.1. Accessing the Template Writing Reference in the Satellite web UI You can access the template writing reference document in the Satellite web UI. Procedure Log in to the Satellite web UI. In the Satellite web UI, navigate to Administer > About . Click the Templates DSL link in the Support section. A.2. Writing ERB Templates The following tags are the most important and commonly used in ERB templates: <% %> All Ruby code is enclosed within <% %> in an ERB template. The code is executed when the template is rendered. It can contain Ruby control flow structures as well as Satellite-specific macros and variables. For example: Note that this template silently performs an action with a service and returns nothing at the output. <%= %> This provides the same functionality as <% %> but when the template is executed, the code output is inserted into the template. This is useful for variable substitution, for example: Example input: Example rendering: Example input: Example rendering: Note that if you enter an incorrect variable, no output is returned. However, if you try to call a method on an incorrect variable, the following error message returns: Example input: Example rendering: <% -%>, <%= -%> By default, a newline character is inserted after a Ruby block if it is closed at the end of a line: Example input: Example rendering: To change the default behavior, modify the enclosing mark with -%> : Example input: Example rendering: This is used to reduce the number of lines, where Ruby syntax permits, in rendered templates. White spaces in ERB tags are ignored. An example of how this would be used in a report template to remove unnecessary newlines between a FQDN and IP address: Example input: Example rendering: <%# %> Encloses a comment that is ignored during template rendering: Example input: This generates no output. Indentation in ERB templates Because of the varying lengths of the ERB tags, indenting the ERB syntax might seem messy. ERB syntax ignore white space. One method of handling the indentation is to declare the ERB tag at the beginning of each new line and then use white space within the ERB tag to outline the relationships within the syntax, for example: A.3. Troubleshooting ERB Templates The Satellite web UI provides two ways to verify the template rendering for a specific host: Directly in the template editor - when editing a template (under Hosts > Partition tables , Hosts > Provisioning templates , or Hosts > Job templates ), on the Template tab click Preview and select a host from the list. The template then renders in the text field using the selected host's parameters. Preview failures can help to identify issues in your template. At the host's details page - select a host at Hosts > All hosts and click the Templates tab to list templates associated with the host. Select Review from the list to the selected template to view it's rendered version. A.4. Generic Satellite-Specific Macros This section lists Satellite-specific macros for ERB templates. You can use the macros listed in the following table across all kinds of templates. Table A.1. Generic Macros Name Description indent(n) Indents the block of code by n spaces, useful when using a snippet template that is not indented. foreman_url(kind) Returns the full URL to host-rendered templates of the given kind. For example, templates of the "provision" type usually reside at http://HOST/unattended/provision . snippet(name) Renders the specified snippet template. Useful for nesting provisioning templates. snippets(file) Renders the specified snippet found in the Foreman database, attempts to load it from the unattended/snippets/ directory if it is not found in the database. snippet_if_exists(name) Renders the specified snippet, skips if no snippet with the specified name is found. A.5. Templates Macros If you want to write custom templates, you can use some of the following macros. Depending on the template type, some of the following macros have different requirements. For more information about the available macros for report templates, in the Satellite web UI, navigate to Monitor > Report Templates , and click Create Template . In the Create Template window, click the Help tab. For more information about the available macros for job templates, in the Satellite web UI, navigate to Hosts > Job Templates , and click the New Job Template . In the New Job Template window, click the Help tab. input Using the input macro, you can customize the input data that the template can work with. You can define the input name, type, and the options that are available for users. For report templates, you can only use user inputs. When you define a new input and save the template, you can then reference the input in the ERB syntax of the template body. This loads the value from user input cpus . load_hosts Using the load_hosts macro, you can generate a complete list of hosts. Use the load_hosts macro with the each_record macro to load records in batches of 1000 to reduce memory consumption. If you want to filter the list of hosts for the report, you can add the option search: input(' Example_Host ') : In this example, you first create an input that you then use to refine the search criteria that the load_hosts macro retrieves. report_row Using the report_row macro, you can create a formatted report for ease of analysis. The report_row macro requires the report_render macro to generate the output. Example input: Example rendering: You can add extra columns to the report by adding another header. The following example adds IP addresses to the report: Example input: Example rendering: report_render This macro is available only for report templates. Using the report_render macro, you create the output for the report. During the template rendering process, you can select the format that you want for the report. YAML, JSON, HTML, and CSV formats are supported. render_template() This macro is available only for job templates. Using this macro, you can render a specific template. You can also enable and define arguments that you want to pass to the template. truthy Using the truthy macro, you can declare if the value passed is true or false, regardless of whether the value is an integer or boolean or string. This macro helps to avoid confusion when your template contains multiple value types. For example, the boolean value true is not the same as the string value "true" . With this macro, you can declare how you want the template to interpret the value and avoid confusion. You can use truthy to declare values as follows: falsy The falsy macro serves the same purpose as the truthy macro. Using the falsy macro, you can declare if the value passed in is true or false, regardless of whether the value is an integer or boolean or string. You can use falsy to declare values as follows: A.6. Host-Specific Variables The following variables enable using host data within templates. Note that job templates accept only @host variables. Table A.2. Host Specific Variables and Macros Name Description @host.architecture The architecture of the host. @host.bond_interfaces Returns an array of all bonded interfaces. See Section A.9, "Parsing Arrays" . @host.capabilities The method of system provisioning, can be either build (for example kickstart) or image. @host.certname The SSL certificate name of the host. @host.diskLayout The disk layout of the host. Can be inherited from the operating system. @host.domain The domain of the host. @host.environment Deprecated Use the host_puppet_environment variable instead. The Puppet environment of the host. @host.facts Returns a Ruby hash of facts from Facter. For example to access the 'ipaddress' fact from the output, specify @host.facts['ipaddress']. @host.grub_pass Returns the host's bootloader password. @host.hostgroup The host group of the host. host_enc['parameters'] Returns a Ruby hash containing information on host parameters. For example, use host_enc['parameters']['lifecycle_environment'] to get the life cycle environment of a host. @host.image_build? Returns true if the host is provisioned using an image. @host.interfaces Contains an array of all available host interfaces including the primary interface. See Section A.9, "Parsing Arrays" . @host.interfaces_with_identifier('IDs') Returns array of interfaces with given identifier. You can pass an array of multiple identifiers as an input, for example @host.interfaces_with_identifier(['eth0', 'eth1']). See Section A.9, "Parsing Arrays" . @host.ip The IP address of the host. @host.location The location of the host. @host.mac The MAC address of the host. @host.managed_interfaces Returns an array of managed interfaces (excluding BMC and bonded interfaces). See Section A.9, "Parsing Arrays" . @host.medium The assigned operating system installation medium. @host.name The full name of the host. @host.operatingsystem.family The operating system family. @host.operatingsystem.major The major version number of the assigned operating system. @host.operatingsystem.minor The minor version number of the assigned operating system. @host.operatingsystem.name The assigned operating system name. @host.operatingsystem.boot_files_uri(medium_provider) Full path to the kernel and initrd, returns an array. @host.os.medium_uri(@host) The URI used for provisioning (path configured in installation media). host_param('parameter_name') Returns the value of the specified host parameter. host_param_false?('parameter_name') Returns false if the specified host parameter evaluates to false. host_param_true?('parameter_name') Returns true if the specified host parameter evaluates to true. @host.primary_interface Returns the primary interface of the host. @host.provider The compute resource provider. @host.provision_interface Returns the provisioning interface of the host. Returns an interface object. @host.ptable The partition table name. @host.puppet_ca_server Deprecated Use the host_puppet_ca_server variable instead. The Puppet CA server the host must use. @host.puppetmaster Deprecated Use the host_puppet_server variable instead. The Puppet server the host must use. @host.pxe_build? Returns true if the host is provisioned using the network or PXE. @host.shortname The short name of the host. @host.sp_ip The IP address of the BMC interface. @host.sp_mac The MAC address of the BMC interface. @host.sp_name The name of the BMC interface. @host.sp_subnet The subnet of the BMC network. @host.subnet.dhcp Returns true if a DHCP proxy is configured for this host. @host.subnet.dns_primary The primary DNS server of the host. @host.subnet.dns_secondary The secondary DNS server of the host. @host.subnet.gateway The gateway of the host. @host.subnet.mask The subnet mask of the host. @host.url_for_boot(:initrd) Full path to the initrd image associated with this host. Not recommended, as it does not interpolate variables. @host.url_for_boot(:kernel) Full path to the kernel associated with this host. Not recommended, as it does not interpolate variables, prefer boot_files_uri. @provisioning_type Equals to 'host' or 'hostgroup' depending on type of provisioning. @static Returns true if the network configuration is static. @template_name Name of the template being rendered. grub_pass Returns a bootloader argument to set the encrypted bootloader password, such as --md5pass=#{@host.grub_pass} . ks_console Returns a string assembled using the port and the baud rate of the host which can be added to a kernel line. For example console=ttyS1,9600 . root_pass Returns the root password configured for the system. The majority of common Ruby methods can be applied on host-specific variables. For example, to extract the last segment of the host's IP address, you can use: A.7. Kickstart-Specific Variables The following variables are designed to be used within kickstart provisioning templates. Table A.3. Kickstart Specific Variables Name Description @arch The host architecture name, same as @host.architecture.name. @dynamic Returns true if the partition table being used is a %pre script (has the #Dynamic option as the first line of the table). @epel A command which will automatically install the correct version of the epel-release rpm. Use in a %post script. @mediapath The full kickstart line to provide the URL command. @osver The operating system major version number, same as @host.operatingsystem.major. A.8. Conditional Statements In your templates, you might perform different actions depending on which value exists. To achieve this, you can use conditional statements in your ERB syntax. In the following example, the ERB syntax searches for a specific host name and returns an output depending on the value it finds: Example input Example rendering A.9. Parsing Arrays While writing or modifying templates, you might encounter variables that return arrays. For example, host variables related to network interfaces, such as @host.interfaces or @host.bond_interfaces , return interface data grouped in an array. To extract a parameter value of a specific interface, use Ruby methods to parse the array. Finding the Correct Method to Parse an Array The following procedure is an example that you can use to find the relevant methods to parse arrays in your template. In this example, a report template is used, but the steps are applicable to other templates. To retrieve the NIC of a content host, in this example, using the @host.interfaces variable returns class values that you can then use to find methods to parse the array. Example input: Example rendering: In the Create Template window, click the Help tab and search for the ActiveRecord_Associations_CollectionProxy and Nic::Base classes. For ActiveRecord_Associations_CollectionProxy , in the Allowed methods or members column, you can view the following methods to parse the array: For Nic::Base , in the Allowed methods or members column, you can view the following method to parse the array: To iterate through an interface array, add the relevant methods to the ERB syntax: Example input: Example rendering: A.10. Example Template Snippets Checking if a Host Has Puppet and Puppetlabs Enabled The following example checks if the host has the Puppet and Puppetlabs repositories enabled: Capturing Major and Minor Versions of a Host's Operating System The following example shows how to capture the minor and major version of the host's operating system, which can be used for package related decisions: Importing Snippets to a Template The following example imports the subscription_manager_registration snippet to the template and indents it by four spaces: Conditionally Importing a Kickstart Snippet The following example imports the kickstart_networking_setup snippet if the host's subnet has the DHCP boot mode enabled: Parsing Values from Host Custom Facts You can use the host.facts variable to parse values from a host's facts and custom facts. In this example luks_stat is a custom fact that you can parse in the same manner as dmi::system::serial_number , which is a host fact: In this example, you can customize the Applicable Errata report template to parse for custom information about the kernel version of each host:
[ "<% if @host.operatingsystem.family == \"Redhat\" && @host.operatingsystem.major.to_i > 6 -%> systemctl <%= input(\"action\") %> <%= input(\"service\") %> <% else -%> service <%= input(\"service\") %> <%= input(\"action\") %> <% end -%>", "echo <%= @host.name %>", "host.example.com", "<% server_name = @host.fqdn %> <%= server_name %>", "host.example.com", "<%= @ example_incorrect_variable .fqdn -%>", "undefined method `fqdn' for nil:NilClass", "<%= \"line1\" %> <%= \"line2\" %>", "line1 line2", "<%= \"line1\" -%> <%= \"line2\" %>", "line1line2", "<%= @host.fqdn -%> <%= @host.ip -%>", "host.example.com10.10.181.216", "<%# A comment %>", "<%- load_hosts.each do |host| -%> <%- if host.build? %> <%= host.name %> build is in progress <%- end %> <%- end %>", "<%= input('cpus') %>", "<%- load_hosts().each_record do |host| -%> <%= host.name %>", "<% load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%= host.name %> <% end -%>", "<%- load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name ) -%> <%- end -%> <%= report_render -%>", "Server FQDN host1.example.com host2.example.com host3.example.com host4.example.com host5.example.com host6.example.com", "<%- load_hosts(search: input('host')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name, 'IP': host.ip ) -%> <%- end -%> <%= report_render -%>", "Server FQDN,IP host1.example.com , 10.8.30.228 host2.example.com , 10.8.30.227 host3.example.com , 10.8.30.226 host4.example.com , 10.8.30.225 host5.example.com , 10.8.30.224 host6.example.com , 10.8.30.223", "<%= report_render -%>", "truthy?(\"true\") => true truthy?(1) => true truthy?(\"false\") => false truthy?(0) => false", "falsy?(\"true\") => false falsy?(1) => false falsy?(\"false\") => true falsy?(0) => true", "<% @host.ip.split('.').last %>", "<% load_hosts().each_record do |host| -%> <% if @host.name == \" host1.example.com \" -%> <% result=\"positive\" -%> <% else -%> <% result=\"negative\" -%> <% end -%> <%= result -%>", "host1.example.com positive", "<%= @host.interfaces -%>", "<Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>", "[] each find_in_batches first map size to_a", "alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid", "<% load_hosts().each_record do |host| -%> <% host.interfaces.each do |iface| -%> iface.alias?: <%= iface.alias? %> iface.attached_to: <%= iface.attached_to %> iface.bond_options: <%= iface.bond_options %> iface.children_mac_addresses: <%= iface.children_mac_addresses %> iface.domain: <%= iface.domain %> iface.fqdn: <%= iface.fqdn %> iface.identifier: <%= iface.identifier %> iface.inheriting_mac: <%= iface.inheriting_mac %> iface.ip: <%= iface.ip %> iface.ip6: <%= iface.ip6 %> iface.link: <%= iface.link %> iface.mac: <%= iface.mac %> iface.managed?: <%= iface.managed? %> iface.mode: <%= iface.mode %> iface.mtu: <%= iface.mtu %> iface.physical?: <%= iface.physical? %> iface.primary: <%= iface.primary %> iface.provision: <%= iface.provision %> iface.shortname: <%= iface.shortname %> iface.subnet: <%= iface.subnet %> iface.subnet6: <%= iface.subnet6 %> iface.tag: <%= iface.tag %> iface.virtual?: <%= iface.virtual? %> iface.vlanid: <%= iface.vlanid %> <%- end -%>", "host1.example.com iface.alias?: false iface.attached_to: iface.bond_options: iface.children_mac_addresses: [] iface.domain: iface.fqdn: host1.example.com iface.identifier: ens192 iface.inheriting_mac: 00:50:56:8d:4c:cf iface.ip: 10.10.181.13 iface.ip6: iface.link: true iface.mac: 00:50:56:8d:4c:cf iface.managed?: true iface.mode: balance-rr iface.mtu: iface.physical?: true iface.primary: true iface.provision: true iface.shortname: host1.example.com iface.subnet: iface.subnet6: iface.tag: iface.virtual?: false iface.vlanid:", "<% pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = pm_set || host_param_true?('force-puppet') puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo') %>", "<% os_major = @host.operatingsystem.major.to_i os_minor = @host.operatingsystem.minor.to_i %> <% if ((os_minor < 2) && (os_major < 14)) -%> <% end -%>", "<%= indent 4 do snippet 'subscription_manager_registration' end %>", "<% subnet = @host.subnet %> <% if subnet.respond_to?(:dhcp_boot_mode?) -%> <%= snippet 'kickstart_networking_setup' %> <% end -%>", "'Serial': host.facts['dmi::system::serial_number'], 'Encrypted': host.facts['luks_stat'],", "<%- report_row( 'Host': host.name, 'Operating System': host.operatingsystem, 'Kernel': host.facts['uname::release'], 'Environment': host.lifecycle_environment, 'Erratum': erratum.errata_id, 'Type': erratum.errata_type, 'Published': erratum.issued, 'Applicable since': erratum.created_at, 'Severity': erratum.severity, 'Packages': erratum.package_names, 'CVEs': erratum.cves, 'Reboot suggested': erratum.reboot_suggested, ) -%>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/template_writing_reference_managing-hosts
Chapter 183. JSon Fastjson DataFormat
Chapter 183. JSon Fastjson DataFormat Available as of Camel version 2.20 Fastjson is a Data Format which uses the Fastjson Library from("activemq:My.Queue"). marshal().json(JsonLibrary.Fastjson). to("mqseries:Another.Queue"); 183.1. Fastjson Options The JSon Fastjson dataformat supports 19 options, which are listed below. Name Default Java Type Description objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper true Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. prettyPrint false Boolean To enable pretty printing output nicely formatted. Is by default false. library XStream JsonLibrary Which json library to use. unmarshalTypeName String Class name of the java type to use when unarmshalling jsonView Class When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL allowJmsType false Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionTypeName String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList false Boolean To unarmshal to a List of Map or a List of Pojo. enableJaxbAnnotationModule false Boolean Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma permissions String Adds permissions that controls which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards is supported by using . as prefix. For example to allow com.foo and all subpackages then specfy com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util. unless its overridden by specifying a JVM system property with they key org.apache.camel.xstream.permissions. allowUnmarshallType false Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 183.2. Spring Boot Auto-Configuration The component supports 20 options, which are listed below. Name Description Default Type camel.dataformat.json-fastjson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.json-fastjson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.json-fastjson.collection-type-name Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.json-fastjson.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.json-fastjson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma String camel.dataformat.json-fastjson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma String camel.dataformat.json-fastjson.enable-jaxb-annotation-module Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. false Boolean camel.dataformat.json-fastjson.enabled Whether to enable auto configuration of the json-fastjson data format. This is enabled by default. Boolean camel.dataformat.json-fastjson.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL String camel.dataformat.json-fastjson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations Class camel.dataformat.json-fastjson.library Which json library to use. JsonLibrary camel.dataformat.json-fastjson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.json-fastjson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.json-fastjson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.json-fastjson.permissions Adds permissions that controls which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards is supported by using . as prefix. For example to allow com.foo and all subpackages then specfy com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util. unless its overridden by specifying a JVM system property with they key org.apache.camel.xstream.permissions. String camel.dataformat.json-fastjson.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.json-fastjson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. String camel.dataformat.json-fastjson.unmarshal-type-name Class name of the java type to use when unarmshalling String camel.dataformat.json-fastjson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.json-fastjson.use-list To unarmshal to a List of Map or a List of Pojo. false Boolean 183.3. Dependencies To use Fastjson in your camel routes you need to add the dependency on camel-fastjson which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-fastjson</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>
[ "from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Fastjson). to(\"mqseries:Another.Queue\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-fastjson</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/json-fastjson-dataformat
Chapter 3. Red Hat build of OpenJDK 17.0.8.1 release notes
Chapter 3. Red Hat build of OpenJDK 17.0.8.1 release notes Review the following release note to understand changes from the Red Hat build of OpenJDK 17.0.8.1 patch release. Note For all the other changes and security fixes, see OpenJDK 17.0.8.1 Released . Fixed Invalid CEN header error on valid .zip files Red Hat build of OpenJDK 17.0.8 introduced additional validation checks on the ZIP64 fields of .zip files (JDK-8302483). However, these additional checks caused validation failures on some valid .zip files with the following error message: Invalid CEN header (invalid zip64 extra data field size) . To fix this issue, Red Hat build of OpenJDK 17.0.8.1 supports zero-length headers and the additional padding that some ZIP64 creation tools produce. From Red Hat build of OpenJDK 17.0.8 onward, you can disable these checks by setting the jdk.util.zip.disableZip64ExtraFieldValidation system property to true . See JDK-8313765 (JDK Bug System) Increased default value of jdk.jar.maxSignatureFileSize system property Red Hat build of OpenJDK 17.0.8 introduced a jdk.jar.maxSignatureFileSize system property for configuring the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file ( JDK-8300596 ). By default, the jdk.jar.maxSignatureFileSize property was set to 8000000 bytes (8 MB), which was too small for some JAR files. Red Hat build of OpenJDK 17.0.8.1 increases the default value of the jdk.jar.maxSignatureFileSize property to 16000000 bytes (16 MB). See JDK-8313216 (JDK Bug System) Advisories related to Red Hat build of OpenJDK 17.0.8.1 The following advisories have been issued to bug fixes and CVE fixes included in this release: RHBA-2023:5226 RHBA-2023:5228
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.8/openjdk-17-0-8-1-release-notes_openjdk
Chapter 1. Configuring Jenkins images
Chapter 1. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example: USD podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 1.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin. Standard authentication provided by Jenkins. 1.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access. Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . To get a sense of which permission groups and permissions IDs are available, go to the matrix authorization page in the Jenkins console and IDs for the groups and individual permissions in the table they provide. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user. Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the time the plugin polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 1.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication by entering the following command: USD oc new-app -e \ JENKINS_PASSWORD=<password> \ ocp-tools-4/jenkins-rhel8 1.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. The fatal error file is saved at /var/lib/jenkins/logs . Default: false AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base-rhel8:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest JAVA_FIPS_OPTIONS Setting this value controls how the JVM operates when running on a FIPS node. For more information, see Configure Red Hat build of OpenJDK 11 in FIPS mode . Default: -Dcom.redhat.fips=false 1.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your same project, you must provide an access token to Jenkins to access your project. Procedure Identify the secret for the service account that has appropriate permissions to access the project that Jenkins must access by entering the following command: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret by entering the following command: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 1.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 1.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins jobs definitions, add additional plugins, or replace the provided config.xml file with your own, custom, configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains those binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. The contents of the configuration/ directory is copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. Sample build configuration to customize the Jenkins image in OpenShift Container Platform apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 1.6. Configuring the Jenkins Kubernetes plugin The OpenShift Jenkins image includes the preinstalled Kubernetes plugin for Jenkins so that Jenkins agents can be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plugin, OpenShift Container Platform provides an OpenShift Agent Base image that is suitable for use as a Jenkins agent. Important OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . The OpenShift Jenkins Maven and NodeJS Agent images were removed from the OpenShift Container Platform 4.11 payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. The Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each image that you can apply to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. Important In OpenShift Container Platform 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Container Platform Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams in the openshift namespace. The OpenShift Container Platform Jenkins image has a pod template named java-build with sidecar containers that demonstrate this approach. This pod template uses the latest Java version provided by the java image stream in the openshift namespace. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. With the OpenShift Container Platform sync plugin, on Jenkins startup, the Jenkins image searches within the project it is running, or the projects listed in the plugin's configuration, for the following items: Image streams with the role label set to jenkins-agent . Image stream tags with the role annotation set to jenkins-agent . Config maps with the role label set to jenkins-agent . When the Jenkins image finds an image stream with the appropriate label, or an image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration. This way, you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream. The name and image references of the image stream, or image stream tag, are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream, or image stream tag object, with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, the Jenkins image assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. One key advantage of config maps over image streams and image stream tags is that you can control all the Kubernetes plugin pod template parameters. Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name>  <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name>  <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name>  <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag deletes any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. If you create appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or add labels after their initial creation, this results in the creation of a PodTemplate in the Kubernetes-plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration. The changes also override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. For more details, see the official Jenkins documentation . Additional resources Important changes to OpenShift Jenkins images 1.7. Jenkins permissions If in the config map the <serviceAccount> element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from the within the pod. 1.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. For example: USD oc describe jenkins-ephemeral 1.9. Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Important OpenShift Container Platform 4.11 removed the OpenShift Jenkins Maven and NodeJS Agent images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp . 5 Specify the container image name again. This is a known issue. 6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. An example of this format, using the sample java-builder pod template that is defined in the OpenShift Container Platform Jenkins image: def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } Additional resources Important changes to OpenShift Jenkins images 1.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other process that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. And if Project quotas allow for it, see recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations proscribe to allocate even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 1.11. Additional resources See Base image options for more information about the Red Hat Universal Base Images (UBI). Important changes to OpenShift Jenkins images
[ "podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name>  <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name>  <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name>  <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "oc describe jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/jenkins/images-other-jenkins
Chapter 5. New features and enhancements
Chapter 5. New features and enhancements 5.1. Red Hat Enterprise Linux 8.1 for SAP Solutions You can use live patching to patch critical CVEs in the kernel without interrupting business critical SAP applications. With this enhancement, interruptions that result from system reboots are minimized. In releases, the initialization of in-memory databases, such as SAP HANA, could take several hours to load data into memory after an outage. SAPInstance: Integrating the upstream patch for systemd-based SAP Start-Up Framework . The resource-agents-sap-hana and resource-agents-sap-hana-scaleout packages provide resource agents for managing SAP HANA System Replication setups in combination with the RHEL HA Add-On. A new rhel-system-roles-sap package is now available for Red Hat Enterprise Linux 8 as a Technology Preview. The rhel-system-roles-sap package provides Red Hat Enterprise Linux System Roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Note Access is limited to RHEL for SAP Solutions offerings. Contact Red Hat Customer Support if you need assistance with your subscription. This enhancement update adds rhel-system-roles-sap to Red Hat Enterprise Linux 8 for SAP Solutions The following new roles are now available: sap-preconfigure sap-netweaver-preconfigure sap-hana-preconfigure An update for resource-agents-sap-hana-scaleout is now available for Red Hat Enterprise Linux 8.1 Extended Update Support. The resource-agents-sap-hana-scaleout packages provide an SAP HANA scale-out resource agent interface with Pacemaker that allows SAP HANA scale-out instances to be managed in a cluster environment. For more information, see Red Hat Enterprise Linux HA Solution for SAP HANA Scale-Out and System Replication . Additional resources Release Notes for Red Hat Enterprise Linux 8.1 5.2. Red Hat Enterprise Linux 8.2 for SAP Solutions SAP HANA users can now use the RHEL in-place upgrade to upgrade SAP environments from RHEL 7 to RHEL 8. For more information, see How to in-place upgrade SAP environments from RHEL 7 to RHEL 8 . Introducing support for IBM virtual Persistent Memory (vPMEM) and increasing maximum amount of supported logical CPUs and physical memory as an enhancement to IBM advanced virtualization platform (PowerVM) on Power9 processor. Introducing support for Intel's 3rd Generation Intel Xeon Scalable Processors (formerly code-named Cooper Lake). RHEL System Roles for SAP, earlier shipped as a Tech Preview in RHEL 8.1 for SAP Solutions, are now General Available (GA). For more information, see Red Hat Enterprise Linux System Roles for SAP . With this release, cgroup v2 is now fully supported. You can use V2 to protect the memory, where SAP Applications store data, for fast access from the Linux kernel memory management to obtain higher performance for SAP systems. The newly introduced PCP HA Cluster PMDA for high-availability / pacemaker clusters allows customers using RHEL HA solutions for SAP to view their cluster health, node health, resource health and location constraints in near real-time, and marks an integral part of the Azure Monitor for SAP Solutions. By introducing a SAP application-focused view into Red Hat Insights, SAP administrators can automatically detect and display all of their SAP applications across numerous environments, to include accessing their application status and risk information, from a single panel. The resource agents for managing SAP HANA Scale-Out System Replication have been updated to also support HANA Multitarget Replication, with manual takeover. For more invormation, see Red Hat Enterprise Linux HA Solution for SAP HANA Scale-Out and System Replication . Additional resources Release Notes for Red Hat Enterprise Linux 8.2 5.3. Red Hat Enterprise Linux 8.4 for SAP Solutions With the enhancement to Red Hat Enterprise Linux System Roles for SAP, customers can now not only configure, but also verify existing RHEL systems to be configured in-line with SAP Best Practices. Adding further automation to the RHEL HA solutions for SAP HANA, allowing pacemaker-based clusters configured for SAP HANA Multitarget System Replication to promote its secondary SAP HANA instances automatically as the new primary node for a third site, if the original primary instance fails. Enhancing support of Red Hat Smart Management and Red Hat Insights for SAP workloads. An update for the resource-agents-sap package is now available for Red Hat Enterprise Linux 8.4 Extended Update Support. The resource-agents-sap package contains SAP resource agents interface with Pacemaker to allow SAP instances to be managed in a cluster environment. An update for rhel-system-roles-sap is now available for Red Hat Enterprise Linux 8.4 Extended Update Support. The rhel-system-roles-sap package provides Red Hat Enterprise Linux (RHEL) System Roles for SAP that can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Note Access is limited to RHEL for SAP Solutions offerings. Contact Red Hat Customer Support if you need assistance with your subscription. An update for resource-agents-sap-hana-scaleout is now available for Red Hat Enterprise Linux 8. The resource agents for managing HANA Scale-Out System Replication have been updated to also support HANA Multitarget Replication. For more information, Red Hat Enterprise Linux HA Solution for SAP HANA Scale-Out and System Replication . RHEL customers running SAP HANA still on RHEL 7.9 can now upgrade their operating system directly to RHEL 8.4 using the in-place upgrade tooling (LEAPP). The package compat-sap-c++-10 is now also available for RHEL 8.4 and later on platform s390x (IBM System Z). Additional resources Release Notes for Red Hat Enterprise Linux 8.4 5.4. Red Hat Enterprise Linux 8.6 for SAP Solutions Adding support for SAP HANA cost-optimized RHEL HA scenarios, enabling customers to: Seamlessly run a QA/Test instance of SAP HANA on the secondary instance instead of idling the system. Have a S/4HANA application server and SAP HANA database managed within the same cluster. Run an SAP NetWeaver primary application server and additional application server on the same cluster node. For more information, see Supported HA Scenarios for SAP HANA, SAP S/4HANA, and SAP NetWeaver . Introduction of RHEL HA fencing agents for IBM Cloud Virtual Server (VPC) and IBM Power Systems Virtual Servers (VS), to allow secure and reliable setup of highly available SAP environments in context of IBM Cloud. Enhancing existing RHEL system roles for SAP by including the new role sap_hana_install , which can be used to install SAP HANA scale-up or scale-out database instances by means of Ansible automation. For more information, see Red Hat Enterprise Linux System Roles for SAP . Inclusion of Processor Counter Monitor (PCM) to ease monitoring of performance and energy metrics of Intel Core, Xeon, Atom and Xeon Phi processors, such as in the context of SAP HANA in-memory workloads. Starting with the latest SAP kernel packages / patch levels (shipping from April 2022 onward) SAP is supporting and enabling by default the systemd environment. All RHEL versions with Update Services for SAP Solutions, starting with RHEL 8.1, have been tested and verified by both Red Hat and SAP to assure the SAP changes with the new systemd based SAP startup framework run with no issues. Added in-place upgrade tool support for SAP HANA customers to go from RHEL 7.9 for SAP Solutions to RHEL 8.6 for SAP Solutions. For more information, see How to in-place upgrade SAP environments from RHEL 7 to RHEL 8 - Red Hat Customer Portal . With the release of RHEL 8.6, the location of sap.conf , which is used to permanently increase kernel.pid_max to ensure the number of tasks per user satisfies the need of the SAP HANA database, has changed from /etc/sysctl.d/ to /usr/lib/sysctl.d/ . For more information, see SAP Note 2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8 . Additional resources Release Notes for Red Hat Enterprise Linux 8.6 5.5. Red Hat Enterprise Linux 8.9 for SAP Solutions When using the HA solutions for managing HANA Multitarget System Replication, it is also possible to set up a separate inactive cluster for managing the HANA instances at the DR site, which can be activated manually in the event of the primary cluster becoming unavailable. For more details, please refer to Configuring SAP HANA Scale-Up Multitarget System Replication for disaster recovery . RHEL HA solutions for SAP now support managing SAP HANA Multitarget System Replication for both HANA Scale-Up and HANA Scale-Out environments, allowing for automated failover with 3 and more replicates. For more details, please refer to Multitarget System Replication . 5.6. Red Hat Enterprise Linux 8.10 for SAP Solutions The following enhancements have been made for the roles given below: collection : Ensures Ansible 2.16.1, 2.15.8, 2.14.12 (cve-2023-5764) compatibility. collection : Minimum Ansible version is now 2.14. preconfigure : Includes SLES related code. Configuring SLES managed nodes is nevertheless unsupported by Red Hat. sap_hana_preconfigure : Implements SAP HANA requirements for RHEL 8.8 and is less restrictive with RHEL versions that are not yet supported for SAP HANA. sap_ha_pacemaker_cluster : Improves VIP resource and constraint setup per platform. For more details refer to Red Hat Enterprise Linux System Roles for SAP . You need to enable standard repositories instead of the E4S variant for RHEL 8.10. For more details refer to RHEL for SAP Subscriptions and Repositories .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/new_features_8.x_release_notes
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_multiple_identity_stores/proc_providing-feedback-on-red-hat-documentation_default
Chapter 3. Cluster capabilities
Chapter 3. Cluster capabilities Cluster administrators can use cluster capabilities to enable or disable optional components prior to installation. Cluster administrators can enable cluster capabilities at anytime after installation. Note Cluster administrators cannot disable a cluster capability after it is enabled. 3.1. Selecting cluster capabilities You can select cluster capabilities by following one of the installation methods that include customizing your cluster, such as "Installing a cluster on AWS with customizations" or "Installing a cluster on GCP with customizations". During a customized installation, you create an install-config.yaml file that contains the configuration parameters for your cluster. Note If you customize your cluster by enabling or disabling specific cluster capabilities, you are responsible for manually maintaining your install-config.yaml file. New OpenShift Container Platform updates might declare new capability handles for existing components, or introduce new components altogether. Users who customize their install-config.yaml file should consider periodically updating their install-config.yaml file as OpenShift Container Platform is updated. You can use the following configuration parameters to select cluster capabilities: capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage 1 Defines a baseline set of capabilities to install. Valid values are None , vCurrent and v4.x . If you select None , all optional capabilities will be disabled. The default value is vCurrent , which enables all optional capabilities. Note v4.x refers to any value up to and including the current cluster version. For example, valid values for a OpenShift Container Platform 4.12 cluster are v4.11 and v4.12 . 2 Defines a list of capabilities to explicitly enable. These will be enabled in addition to the capabilities specified in baselineCapabilitySet . Note In this example, the default capability is set to v4.11 . The additionalEnabledCapabilities field enables additional capabilities over the default v4.11 capability set. The following table describes the baselineCapabilitySet values. Table 3.1. Cluster capabilities baselineCapabilitySet values description Value Description vCurrent Specify this option when you want to automatically add new, default capabilities that are introduced in new releases. v4.11 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.11. By specifying v4.11 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.11 are baremetal , MachineAPI , marketplace , and openshift-samples . v4.12 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.12. By specifying v4.12 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.12 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , and CSISnapshot . v4.13 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.13. By specifying v4.13 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.13 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , and NodeTuning . v4.14 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.14. By specifying v4.14 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.14 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , and DeploymentConfig . None Specify when the other sets are too large, and you do not need any capabilities or want to fine-tune via additionalEnabledCapabilities . Additional resources Installing a cluster on AWS with customizations Installing a cluster on GCP with customizations 3.2. Optional cluster capabilities in OpenShift Container Platform 4.14 Currently, cluster Operators provide the features for these optional capabilities. The following summarizes the features provided by each capability and what functionality you lose if it is disabled. Additional resources Cluster Operators reference 3.2.1. Bare-metal capability Purpose The Cluster Baremetal Operator provides the features for the baremetal capability. The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. The bare-metal capability is required for deployments using installer-provisioned infrastructure. Disabling the bare-metal capability can result in unexpected problems with these deployments. It is recommended that cluster administrators only disable the bare-metal capability during installations with user-provisioned infrastructure that do not have any BareMetalHost resources in the cluster. Important If the bare-metal capability is disabled, the cluster cannot provision or manage bare-metal nodes. Only disable the capability if there are no BareMetalHost resources in your deployment. The baremetal capability depends on the MachineAPI capability. If you enable the baremetal capability, you must also enable MachineAPI . Additional resources Deploying installer-provisioned clusters on bare metal Preparing for bare metal cluster installation Bare metal configuration 3.2.2. Build capability Purpose The Build capability enables the Build API. The Build API manages the lifecycle of Build and BuildConfig objects. Important If the Build capability is disabled, the cluster cannot use Build or BuildConfig resources. Disable the capability only if Build and BuildConfig resources are not required in the cluster. 3.2.3. Cluster Image Registry capability Purpose The Cluster Image Registry Operator provides features for the ImageRegistry capability. The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. In order to integrate the image registry into the cluster's user authentication and authorization system, a service account token secret and an image pull secret are generated for each service account in the cluster. Important If you disable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, the service account token secret and image pull secret are not generated for each service account. If you disable the ImageRegistry capability, you can reduce the overall resource footprint of OpenShift Container Platform in resource-constrained environments. Depending on your deployment, you can disable this component if you do not need it. Project cluster-image-registry-operator Additional resources Image Registry Operator in OpenShift Container Platform 3.2.4. Cluster storage capability Purpose The Cluster Storage Operator provides the features for the Storage capability. The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Important If the cluster storage capability is disabled, the cluster will not have a default storageclass or any CSI drivers. Users with administrator privileges can create a default storageclass and manually install CSI drivers if the cluster storage capability is disabled. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. 3.2.5. Console capability Purpose The Console Operator provides the features for the Console capability. The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Additional resources Web console overview 3.2.6. CSI snapshot controller capability Purpose The Cluster CSI Snapshot Controller Operator provides the features for the CSISnapshot capability. The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Additional resources CSI volume snapshots 3.2.7. DeploymentConfig capability Purpose The DeploymentConfig capability enables and manages the DeploymentConfig API. Important If the DeploymentConfig capability is disabled, the cluster cannot use DeploymentConfig resources. Disable the capability only if DeploymentConfig resources are not required in the cluster. 3.2.8. Insights capability Purpose The Insights Operator provides the features for the Insights capability. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Using Insights Operator 3.2.9. Machine API capability Purpose The machine-api-operator , cluster-autoscaler-operator , and cluster-control-plane-machine-set-operator Operators provide the features for the MachineAPI capability. You can disable this capability only if you install a cluster with user-provisioned infrastructure. The Machine API capability is responsible for all machine configuration and management in the cluster. If you disable the Machine API capability during installation, you need to manage all machine-related tasks manually. Additional resources Overview of machine management Machine API Operator Cluster Autoscaler Operator Control Plane Machine Set Operator 3.2.10. Marketplace capability Purpose The Marketplace Operator provides the features for the marketplace capability. The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. If you disable the marketplace capability, the Marketplace Operator does not create the openshift-marketplace namespace. Catalog sources can still be configured and managed on the cluster manually, but OLM depends on the openshift-marketplace namespace in order to make catalogs available to all namespaces on the cluster. Users with elevated permissions to create namespaces prefixed with openshift- , such as system or cluster administrators, can manually create the openshift-marketplace namespace. If you enable the marketplace capability, you can enable and disable individual catalogs by configuring the Marketplace Operator. Additional resources Red Hat-provided Operator catalogs 3.2.11. Node Tuning capability Purpose The Node Tuning Operator provides features for the NodeTuning capability. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. If you disable the NodeTuning capability, some default tuning settings will not be applied to the control-plane nodes. This might limit the scalability and performance of large clusters with over 900 nodes or 900 routes. Additional resources Using the Node Tuning Operator 3.2.12. OpenShift samples capability Purpose The Cluster Samples Operator provides the features for the openshift-samples capability. The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. If you disable the samples capability, users cannot access the image streams, samples, and templates it provides. Depending on your deployment, you might want to disable this component if you do not need it. Additional resources Configuring the Cluster Samples Operator 3.3. Additional resources Enabling cluster capabilities after installation
[ "capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installation_overview/cluster-capabilities
Chapter 8. Using Fernet keys for encryption in the overcloud
Chapter 8. Using Fernet keys for encryption in the overcloud Fernet is the default token provider, that replaces uuid . You can review your Fernet deployment and rotate the Fernet keys. Fernet uses three types of keys, which are stored in /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys . The highest-numbered directory contains the primary key, which generates new tokens and decrypts existing tokens. Fernet key rotation uses the following process: The primary key becomes the secondary key. The <system> issues a new primary key. The outgoing primary key is no longer valid. You can use secondary keys to decrypt tokens that were associated with primary keys, but you cannot issue new tokens. When you decide the length of Fernet key rotation cycles, follow the security posture of your organization. If your organization does not have guidance, a monthly rotation cycle is good practice for security reasons. 8.1. Reviewing the Fernet deployment To test that Fernet tokens are working correctly, retrieve the IP address of the Controller node, SSH into the Controller node, and review the settings of the token driver and provider. Procedure Retrieve the IP address of the Controller node: SSH into the Controller node: Retrieve the values of the token driver and provider settings: Test the Fernet provider: The result includes the long Fernet token. 8.2. Rotating the Fernet keys by using the Workflow service To ensure that the Fernet keys persist after stack updates, rotate the keys with the Workflow service (mistral). By default, director manages the overcloud Fernet keys in an environment file with the ManageKeystoneFernetKeys parameter. The Fernet keys are stored in the Workflow service, in the KeystoneFernetKeys section. Procedure Review the existing Fernet keys: Identify the Fernet key location. Log in to a Controller node as the tripleo-admin user and use the crudini command to query the Fernet keys: Note The /etc/keystone/ directory refers to the container file system path. Inspect the current Fernet key directories: 0 - Contains the staged key, which becomes the primary key and is always numbered 0 . 1 - Contains the secondary key. 2 - Contains the primary key. This number increments each time that the keys rotate. The highest number always serves as the primary key. Note The maximum number of keys is set with max_active_keys property. The default is 5 keys. The keys propagate across all Controller nodes. Rotate the Fernet keys by using the workflow command: Verification Retrieve the ID and ensure that the workflow is successful. On the Controller node, review the number of Fernet keys, and compare with the result. 0 - Contains the staged key and always be numbered 0 . This key becomes a primary key during the rotation. 1 & 2 - Contain the secondary keys. 3 - Contains the primary key. This number increments each time the keys rotate. The highest number always serves as the primary key. Note The maximum number of keys is set with the max_active_keys property. The default is 5 keys. The keys propagate across all Controller nodes.
[ "[stack@director ~]USD source ~/stackrc [stack@director ~]USD openstack server list -------------------------------------- ------------------------- -------- ---------------------+ | ID | Name | Status | Networks | -------------------------------------- ------------------------- -------- ---------------------+ | 756fbd73-e47b-46e6-959c-e24d7fb71328 | overcloud-controller-0 | ACTIVE | ctlplane=192.0.2.16 | | 62b869df-1203-4d58-8e45-fac6cd4cfbee | overcloud-novacompute-0 | ACTIVE | ctlplane=192.0.2.8 | -------------------------------------- ------------------------- -------- ---------------------+", "[tripleo-admin@overcloud-controller-0 ~]USD ssh [email protected]", "[tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf token driver sql [tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf token provider fernet", "[tripleo-admin@overcloud-controller-0 ~]USD exit [stack@director ~]USD source ~/overcloudrc [stack@director ~]USD openstack token issue ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2016-09-20 05:26:17+00:00 | | id | gAAAAABX4LppE8vaiFZ992eah2i3edpO1aDFxlKZq6a_RJzxUx56QVKORrmW0-oZK3-Xuu2wcnpYq_eek2SGLz250eLpZOzxKBR0GsoMfxJU8mEFF8NzfLNcbuS-iz7SV-N1re3XEywSDG90JcgwjQfXW-8jtCm-n3LL5IaZexAYIw059T_-cd8 | | project_id | 26156621d0d54fc39bf3adb98e63b63d | | user_id | 397daf32cadd490a8f3ac23a626ac06c | ------------ -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "[stack@<undercloud_host> ~]USD ssh tripleo-admin@overcloud-controller-o [tripleo-admin@overcloud-controller-0 ~]USD sudo crudini --get /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf fernet_tokens key_repository /etc/keystone/fernet-keys", "[tripleo-admin@overcloud-controller-0 ~]USD sudo ls /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys 0 1 2", "[stack@director ~]USD source ~/stackrc [stack@director ~]USD openstack workflow execution create tripleo.fernet_keys.v1.rotate_fernet_keys {\"container\": \"overcloud\"} ------------------- -------------------------------------------+ | Field | Value | ------------------- -------------------------------------------+ | ID | 58c9c664-b966-4f82-b368-af5ed8de5b47 | | Workflow ID | 78f0990a-3d34-4bf2-a127-10c149bb275c | | Workflow name | tripleo.fernet_keys.v1.rotate_fernet_keys | | Description | | | Task Execution ID | <none> | | State | RUNNING | | State info | None | | Created at | 2017-12-20 11:13:50 | | Updated at | 2017-12-20 11:13:50 | ------------------- -------------------------------------------+", "[stack@director ~]USD openstack workflow execution show 58c9c664-b966-4f82-b368-af5ed8de5b47 ------------------- -------------------------------------------+ | Field | Value | ------------------- -------------------------------------------+ | ID | 58c9c664-b966-4f82-b368-af5ed8de5b47 | | Workflow ID | 78f0990a-3d34-4bf2-a127-10c149bb275c | | Workflow name | tripleo.fernet_keys.v1.rotate_fernet_keys | | Description | | | Task Execution ID | <none> | | State | SUCCESS | | State info | None | | Created at | 2017-12-20 11:13:50 | | Updated at | 2017-12-20 11:15:00 | ------------------- -------------------------------------------+", "[tripleo-admin@overcloud-controller-0 ~]USD sudo ls /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys 0 1 2 3" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly-using-fernet-keys-for-encryption-in-the-overcloud_security_and_hardening
Managing Red Hat Decision Manager and KIE Server settings
Managing Red Hat Decision Manager and KIE Server settings Red Hat Decision Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/index
Chapter 35. flavor
Chapter 35. flavor This chapter describes the commands under the flavor command. 35.1. flavor create Create new flavor Usage: Table 35.1. Positional Arguments Value Summary <flavor-name> New flavor name Table 35.2. Optional Arguments Value Summary -h, --help Show this help message and exit --id <id> Unique flavor id; auto creates a uuid (default: auto) --ram <size-mb> Memory size in mb (default 256m) --disk <size-gb> Disk size in gb (default 0g) --ephemeral <size-gb> Ephemeral disk size in gb (default 0g) --swap <size-mb> Additional swap space size in mb (default 0m) --vcpus <vcpus> Number of vcpus (default 1) --rxtx-factor <factor> Rx/tx factor (default 1.0) --public Flavor is available to other projects (default) --private Flavor is not available to other projects --property <key=value> Property to add for this flavor (repeat option to set multiple properties) --project <project> Allow <project> to access private flavor (name or id) (Must be used with --private option) --description <description> Description for the flavor.(supported by api versions 2.55 - 2.latest --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 35.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 35.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 35.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 35.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 35.2. flavor delete Delete flavor(s) Usage: Table 35.7. Positional Arguments Value Summary <flavor> Flavor(s) to delete (name or id) Table 35.8. Optional Arguments Value Summary -h, --help Show this help message and exit 35.3. flavor list List flavors Usage: Table 35.9. Optional Arguments Value Summary -h, --help Show this help message and exit --public List only public flavors (default) --private List only private flavors --all List all flavors, whether public or private --long List additional fields in output --marker <flavor-id> The last flavor id of the page --limit <num-flavors> Maximum number of flavors to display Table 35.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 35.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 35.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 35.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 35.4. flavor set Set flavor properties Usage: Table 35.14. Positional Arguments Value Summary <flavor> Flavor to modify (name or id) Table 35.15. Optional Arguments Value Summary -h, --help Show this help message and exit --no-property Remove all properties from this flavor (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Property to add or modify for this flavor (repeat option to set multiple properties) --project <project> Set flavor access to project (name or id) (admin only) --description <description> Set description for the flavor.(supported by api versions 2.55 - 2.latest --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 35.5. flavor show Display flavor details Usage: Table 35.16. Positional Arguments Value Summary <flavor> Flavor to display (name or id) Table 35.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 35.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 35.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 35.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 35.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 35.6. flavor unset Unset flavor properties Usage: Table 35.22. Positional Arguments Value Summary <flavor> Flavor to modify (name or id) Table 35.23. Optional Arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from flavor (repeat option to unset multiple properties) --project <project> Remove flavor access from project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist.
[ "openstack flavor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--id <id>] [--ram <size-mb>] [--disk <size-gb>] [--ephemeral <size-gb>] [--swap <size-mb>] [--vcpus <vcpus>] [--rxtx-factor <factor>] [--public | --private] [--property <key=value>] [--project <project>] [--description <description>] [--project-domain <project-domain>] <flavor-name>", "openstack flavor delete [-h] <flavor> [<flavor> ...]", "openstack flavor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--public | --private | --all] [--long] [--marker <flavor-id>] [--limit <num-flavors>]", "openstack flavor set [-h] [--no-property] [--property <key=value>] [--project <project>] [--description <description>] [--project-domain <project-domain>] <flavor>", "openstack flavor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor>", "openstack flavor unset [-h] [--property <key>] [--project <project>] [--project-domain <project-domain>] <flavor>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/flavor
Chapter 3. Restarting the cluster gracefully
Chapter 3. Restarting the cluster gracefully This document describes the process to restart your cluster after a graceful shutdown. Even though the cluster is expected to be functional after the restart, the cluster might not recover due to unexpected conditions, for example: etcd data corruption during shutdown Node failure due to hardware Network connectivity issues If your cluster fails to recover, follow the steps to restore to a cluster state . 3.1. Prerequisites You have gracefully shut down your cluster . 3.2. Restarting the cluster You can restart your cluster after it has been shut down gracefully. Prerequisites You have access to the cluster as a user with the cluster-admin role. This procedure assumes that you gracefully shut down the cluster. Procedure Power on any cluster dependencies, such as external storage or an LDAP server. Start all cluster machines. Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider's web console. Wait approximately 10 minutes before continuing to check the status of control plane nodes. Verify that all control plane nodes are ready. USD oc get nodes -l node-role.kubernetes.io/master The control plane nodes are ready if the status is Ready , as shown in the following output: NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 75m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 75m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 75m v1.28.5 If the control plane nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. Get the list of current CSRs: USD oc get csr Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid CSR: USD oc adm certificate approve <csr_name> After the control plane nodes are ready, verify that all worker nodes are ready. USD oc get nodes -l node-role.kubernetes.io/worker The worker nodes are ready if the status is Ready , as shown in the following output: NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 64m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 64m v1.28.5 If the worker nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. Get the list of current CSRs: USD oc get csr Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid CSR: USD oc adm certificate approve <csr_name> Verify that the cluster started properly. Check that there are no degraded cluster Operators. USD oc get clusteroperators Check that there are no cluster Operators with the DEGRADED condition set to True . NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 59m cloud-credential 4.15.0 True False False 85m cluster-autoscaler 4.15.0 True False False 73m config-operator 4.15.0 True False False 73m console 4.15.0 True False False 62m csi-snapshot-controller 4.15.0 True False False 66m dns 4.15.0 True False False 76m etcd 4.15.0 True False False 76m ... Check that all nodes are in the Ready state: USD oc get nodes Check that the status for all nodes is Ready . NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 82m v1.28.5 ip-10-0-179-95.ec2.internal Ready worker 70m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 70m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 82m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 69m v1.28.5 If the cluster did not start properly, you might need to restore your cluster using an etcd backup. After the control plane and worker nodes are ready, mark all the nodes in the cluster as schedulable. Run the following command: for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm uncordon USD{node} ; done Additional resources See Restoring to a cluster state for how to use an etcd backup to restore if your cluster failed to recover after restarting.
[ "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 75m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 75m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 75m v1.28.5", "oc get csr", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc get nodes -l node-role.kubernetes.io/worker", "NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 64m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 64m v1.28.5", "oc get csr", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 59m cloud-credential 4.15.0 True False False 85m cluster-autoscaler 4.15.0 True False False 73m config-operator 4.15.0 True False False 73m console 4.15.0 True False False 62m csi-snapshot-controller 4.15.0 True False False 66m dns 4.15.0 True False False 76m etcd 4.15.0 True False False 76m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 82m v1.28.5 ip-10-0-179-95.ec2.internal Ready worker 70m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 70m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 82m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 69m v1.28.5", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm uncordon USD{node} ; done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/backup_and_restore/graceful-restart-cluster
Chapter 8. LVM Configuration
Chapter 8. LVM Configuration LVM can be configured during the graphical installation process, the text-based installation process, or during a kickstart installation. You can use the utilities from the lvm package to create your own LVM configuration post-installation, but these instructions focus on using Disk Druid during installation to complete this task. Read Chapter 7, Logical Volume Manager (LVM) first to learn about LVM. An overview of the steps required to configure LVM include: Creating physical volumes from the hard drives. Creating volume groups from the physical volumes. Creating logical volumes from the volume groups and assign the logical volumes mount points. Note Although the following steps are illustrated during a GUI installation, the same can be done during a text-based installation. Two 9.1 GB SCSI drives ( /dev/sda and /dev/sdb ) are used in the following examples. They detail how to create a simple configuration using a single LVM volume group with associated logical volumes during installation. 8.1. Automatic Partitioning On the Disk Partitioning Setup screen, select Automatically partition . For Red Hat Enterprise Linux, LVM is the default method for disk partitioning. If you do not wish to have LVM implemented, or if you require RAID partitioning, manual disk partitioning through Disk Druid is required. The following properties make up the automatically created configuration: The /boot/ partition resides on its own non-LVM partition. In the following example, it is the first partition on the first drive ( /dev/sda1 ). Bootable partitions cannot reside on LVM logical volumes. A single LVM volume group ( VolGroup00 ) is created, which spans all selected drives and all remaining space available. In the following example, the remainder of the first drive ( /dev/sda2 ), and the entire second drive ( /dev/sdb1 ) are allocated to the volume group. Two LVM logical volumes ( LogVol00 and LogVol01 ) are created from the newly created spanned volume group. In the following example, the recommended swap space is automatically calculated and assigned to LogVol01 , and the remainder is allocated to the root file system, LogVol00 . Figure 8.1. Automatic LVM Configuration With Two SCSI Drives Note If enabling quotas are of interest to you, it may be best to modify the automatic configuration to include other mount points, such as /home/ or /var/ , so that each file system has its own independent quota configuration limits. In most cases, the default automatic LVM partitioning is sufficient, but advanced implementations could warrant modification or manual configuration of the LVM partition tables. Note If you anticipate future memory upgrades, leaving some free space in the volume group would allow for easy future expansion of the swap space logical volume on the system; in which case, the automatic LVM configuration should be modified to leave available space for future growth.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/lvm_configuration
Chapter 11. Creating and managing OSTree image updates
Chapter 11. Creating and managing OSTree image updates You can easily create and manage OSTree image updates for your RHEL for Edge systems and make them immediately available to RHEL for Edge devices. With OSTree, you can use image builder to create RHEL for Edge Commit or RHEL for Edge Container images as .tar files that contain OSTree commits. The OSTree update versioning system works as a "Git repository" that stores and versions the OSTree commits. The rpm-ostree image and package system then assembles the commits on the client device. When you create a new image with RHEL image builder to perform an update, RHEL image builder pulls updates from these repositories. 11.1. Basic concepts for OSTree Basic terms that OSTree and rpm-ostree use during image updates. rpm-ostree The technology on the edge device that handles how the OSTree commits are assembled on the device. It works as a hybrid between an image and a package system. With the rpm-ostree technology, you can make atomic upgrades and rollbacks to your system. OSTree OSTree is a technology that enables you to create commits and download bootable file system trees. You can also use it to deploy the trees and manage the boot loader configuration. Commit An OSTree commit contains a full operating system that is not directly bootable. To boot the system, you must deploy it, for example, with a RHEL Installable image. Reference It is also known as ref . An OSTree ref is similar to a Git branch and it is a name. The following reference names examples are valid: rhel/9/x86_64/edge ref-name app/org.gnome.Calculator/x86_64/stable ref-name-2 By default, RHEL image builder specifies rhel/8/USDARCH/edge as a path. The "USDARCH" value is determined by the host machine. Parent The parent argument is an OSTree commit to build a new commit with RHEL image builder. It retrieves a parent commit for the new commit that you are building. You can specify the parent commit as a ref value to be resolved and pulled, for example rhel/8/x86_64/edge . You can also use the Commit ID that you can find in the extracted .tar image file. Remote The http or https endpoint that hosts the OSTree content. This is analogous to the baseurl for a yum repository. Static delta Static deltas are a collection of updates generated between two OSTree commits. This enables the system client to fetch a smaller amount of files, which are larger in size. The static deltas updates are more network efficient because, when updating an ostree-based host, the system client will only fetch the objects from the new OSTree commit which do not exist on the system. Typically, the new OSTree commit contains many small files, which requires multiple TCP connections. Summary The summary file is a concise way of enumerating refs, checksums, and available static deltas in an OSTree repo. You can check the state of all the refs and static deltas available in an Ostree repo. However, you must generate the summary file every time a new ref, commit, or static-delta is added to the OSTree repo. 11.2. Creating OSTree repositories You can create OSTree repos with RHEL image builder by using either RHEL for Edge Commit (.tar) or RHEL for Edge Container (.tar) image types. These image types contain an OSTree repo that contains a single OSTree commit. You can extract the RHEL for Edge Commit (.tar) on a web server and it is ready to be served. You must import the RHEL for Edge Container (.tar) to a local container image storage or push the image to a container registry. After you start the container, it serves the commit over an integrated nginx web server. Use the RHEL for Edge Container (.tar) on a RHEL server with Podman to create an OSTree repo: Prerequisite You created a RHEL for Edge Container (.tar) image. Procedure Download the container image from RHEL image builder: Import the container into Podman: Start the container and make it available by using the port 8080 : Verification Check that the container is running: 11.3. Managing a centralized OSTree mirror For production environments, having a central OSTree mirror that serves all the commits has several advantages, including: Deduplicating and minimizing disk storage Optimizing the updates to clients by using static delta updates Pointing to a single OSTree mirror for their deployment life. To manage a centralized OSTree mirror, you must pull each commit from RHEL image builder into the centralized repository where it will be available to your users. Note You can also automate managing an OSTree mirror by using the osbuild.infra Ansible collection. See osbuild.infra Ansible . To create a centralized repository you can run the following commands directly on a web server: Procedure Create an empty blueprint, customizing it to use "rhel-94" as the distro : Push the blueprint to the server: Build a RHEL for Edge Commit ( .tar ) image from the blueprint you created: Retrieve the .tar file and decompress it to the disk: The /usr/share/nginx/html/repo location on disk will become the single OSTree repo for all refs and commits. Create another empty blueprint, customizing it to use "rhel-810" as the distro : Push the blueprint and create another RHEL for Edge Commit ( .tar ) image: Retrieve the .tar file and decompress it to the disk: Pull the commit to the local repo. By using ostree pull-local , you can copy the commit data from one local repo to another local repo. Optional: Inspect the status of the OSTree repo. The following is an output example: Update the RHEL 9.4 blueprint to include a new package and build a new commit, for example: Push the updated blueprint and create a new RHEL for Edge Commit ( .tar ) image, pointing the compose to the existing OSTree repo: Retrieve the .tar file and decompress it to the disk: Pull the commit to repo: Optional: Inspect the OSTree repo status again:
[ "composer-cli compose image <UUI>", "skopeo copy oci-archive:<UUI>-container.tar containers-storage:localhost/ostree", "podman run -rm -p 8080:8080 ostree", "podman ps -a", "name = \"minimal-rhel94\" description = \"minimal blueprint for ostree commit\" version = \"1.0.0\" modules = [] groups = [] distro = \"rhel-94\"", "composer-cli blueprints push minimal-rhel94.toml", "composer-cli compose start-ostree minimal-rhel94 edge-commit", "composer-cli compose image _<rhel-94-uuid> tar -xf <rhel-94-uuid>.tar -C /usr/share/nginx/html/", "name = \"minimal-rhel810\" description = \"minimal blueprint for ostree commit\" version = \"1.0.0\" modules = [] groups = [] distro = \"rhel-810\"", "*composer-cli blueprints push minimal-rhel810.toml* *composer-cli compose start-ostree minimal-rhel810 edge-commit*", "composer-cli compose image <rhel-810-uuid> tar -xf <rhel-810-uuid>.tar", "ostree --repo=/usr/share/nginx/html/repo pull-local repo", "ostree --repo=/usr/share/nginx/html/repo refs rhel/8/x86_64/edge rhel/9/x86_64/edge ostree --repo=/usr/share/nginx/html/repo show rhel/8/x86_64/edge commit f7d4d95465fbd875f6358141f39d0c573df6a321627bafde68c73850667e5443 ContentChecksum: 41bf2f8b442a770e9bf03e096a46a286f5836e0a0702b7c3516ef4e0acec2dea Date: 2023-09-15 16:17:04 +0000 Version: 8.10 (no subject) ostree --repo=/usr/share/nginx/html/repo show rhel/9/x86_64/edge commit 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 70235bfb9cae82c53f856183750e809becf0b9b076122b19c40fec92fc6d74c1 Date: 2023-09-15 15:30:24 +0000 Version: 9.4 (no subject)", "name = \"minimal-rhel94\" description = \"minimal blueprint for ostree commit\" version = \"1.1.0\" modules = [] groups = [] distro = \"rhel-94\" [[packages]] name = \"strace\" version = \"*\"", "composer-cli blueprints push minimal-rhel94.toml composer-cli compose start-ostree minimal-rhel94 edge-commit --url http://localhost/repo --ref rhel/9/x86_64/edge", "rm -rf repo composer-cli compose image <rhel-94-uuid> tar -xf <rhel-94-uuid>.tar", "ostree --repo=/usr/share/nginx/html/repo pull-local repo", "ostree --repo=/usr/share/nginx/html/repo refs rhel/8/x86_64/edge rhel/9/x86_64/edge ostree --repo=/usr/share/nginx/html/repo show rhel/8/x86_64/edge commit f7d4d95465fbd875f6358141f39d0c573df6a321627bafde68c73850667e5443 ContentChecksum: 41bf2f8b442a770e9bf03e096a46a286f5836e0a0702b7c3516ef4e0acec2dea Date: 2023-09-15 16:17:04 +0000 Version: 8.10 (no subject) ostree --repo=/usr/share/nginx/html/repo show rhel/9/x86_64/edge commit a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 Parent: 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 2335930df6551bf7808e49f8b35c45e3aa2a11a6c84d988623fd3f36df42a1f1 Date: 2023-09-15 18:21:31 +0000 Version: 9.4 (no subject) ostree --repo=/usr/share/nginx/html/repo log rhel/9/x86_64/edge commit a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 Parent: 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 2335930df6551bf7808e49f8b35c45e3aa2a11a6c84d988623fd3f36df42a1f1 Date: 2023-09-15 18:21:31 +0000 Version: 9.4 (no subject) commit 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ContentChecksum: 70235bfb9cae82c53f856183750e809becf0b9b076122b19c40fec92fc6d74c1 Date: 2023-09-15 15:30:24 +0000 Version: 9.4 (no subject) rpm-ostree db diff --repo=/usr/share/nginx/html/repo 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 ostree diff commit from: 89290dbfd6f749700c77cbc434c121432defb0c1c367532368eee170d9e53ea9 ostree diff commit to: a35c3b1a9e731622f32396bb1aa84c73b16bd9b9b423e09d72efaca11b0411c9 Added: elfutils-default-yama-scope-0.188-3.el9.noarch elfutils-libs-0.188-3.el9.x86_64 strace-5.18-2.el9.x86_64" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_installing_and_managing_rhel_for_edge_images/creating-and-managing-ostree-image-updates_composing-installing-managing-rhel-for-edge-images
5.130. kdebase
5.130. kdebase 5.130.1. RHBA-2012:1371 - kdebase bug fix update Updated kdebase packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The K Desktop Environment (KDE) is a graphical desktop environment for the X Window System. The kdebase packages include core applications for KDE. Bug Fixes BZ# 608007 Prior to this update, the Konsole context menu item "Show menu bar" was always checked in new windows even if this menu item was disabled before. This update modifies the underlying code to handle the menu item "Show menu bar" as expected. BZ# 729307 Prior to this update, users could not define a default size for xterm windows when using the Konsole terminal in KDE. This update modifies the underlying code and adds the functionality to define a default size. All users of kdebase are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kdebase
23.2. Types
23.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with Postfix. Different types all you to configure flexible access: postfix_etc_t This type is used for configuration files for Postfix in the /etc/postfix/ directory. postfix_data_t This type is used for Postfix data files in the /var/lib/postfix/ directory. postfix_var_run_t This type is used for Postfix files stored in the /run/ directory. postfix_initrc_exec_t The Postfix executable files are labeled with the postfix_initrc_exec_t type. When executed, they transition to the postfix_initrc_t domain. postfix_spool_t This type is used for Postfix files stored in the /var/spool/ directory. Note To see the full list of files and their types for Postfix, enter the following command:
[ "~]USD grep postfix /etc/selinux/targeted/contexts/files/file_contexts" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-postfix-types
Chapter 2. Navigating the Red Hat Hybrid Cloud Console
Chapter 2. Navigating the Red Hat Hybrid Cloud Console From within the Red Hat Hybrid Cloud Console, you can take guided tours of the console and its services, search for information to help you achieve your goals, or start using a service. Here is a list of some of the tasks that you can perform: Find a service on the All Services page and make it a favorite to easily find later. Configure the following global settings from the Settings menu under the gear icon: Notifications: Configure how and when you receive notifications about important events that occur in your console services. Configure user access from the Identity & Access Management menu under the gear icon. Configure your preferences for notifications from User Preferences , under your profile menu. Review updates to the console, take product tours, and submit feedback.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/getting_started_with_the_red_hat_hybrid_cloud_console_with_fedramp/navigating-hybrid-cloud-console_getting-started
Chapter 12. Network Observability CLI
Chapter 12. Network Observability CLI 12.1. Installing the Network Observability CLI The Network Observability CLI ( oc netobserv ) is deployed separately from the Network Observability Operator. The CLI is available as an OpenShift CLI ( oc ) plugin. It provides a lightweight way to quickly debug and troubleshoot with network observability. 12.1.1. About the Network Observability CLI You can quickly debug and troubleshoot networking issues by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator. Important CLI capture is meant to run only for short durations, such as 8-10 minutes. If it runs for too long, it can be difficult to delete the running process. 12.1.2. Installing the Network Observability CLI Installing the Network Observability CLI ( oc netobserv ) is a separate procedure from the Network Observability Operator installation. This means that, even if you have the Operator installed from OperatorHub, you need to install the CLI separately. Note You can optionally use Krew to install the netobserv CLI plugin. For more information, see "Installing a CLI plugin with Krew". Prerequisites You must install the OpenShift CLI ( oc ). You must have a macOS or Linux operating system. Procedure Download the oc netobserv file that corresponds with your architecture. For example, for the amd64 archive: USD curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64 Make the file executable: USD chmod +x ./oc-netobserv-amd64 Move the extracted netobserv-cli binary to a directory that is on your PATH , such as /usr/local/bin/ : USD sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv Verification Verify that oc netobserv is available: USD oc netobserv version Example output Netobserv CLI version <version> Additional resources Installing and using CLI plugins Installing a CLI plugin with Krew 12.2. Using the Network Observability CLI You can visualize and filter the flows and packets data directly in the terminal to see specific usage, such as identifying who is using a specific port. The Network Observability CLI collects flows as JSON and database files or packets as a PCAP file, which you can use with third-party tools. 12.2.1. Capturing flows You can capture flows and filter on any resource or zone in the data to solve use cases, such as displaying Round-Trip Time (RTT) between two zones. Table visualization in the CLI provides viewing and flow search capabilities. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture flows with filters enabled by running the following command: USD oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to further refine the incoming flows. For example: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . The data that was captured is written to two separate files in an ./output directory located in the same path used to install the CLI. View the captured data in the ./output/flow/<capture_date_time>.json JSON file, which contains JSON arrays of the captured data. Example JSON file { "AgentIP": "10.0.1.76", "Bytes": 561, "DnsErrno": 0, "Dscp": 20, "DstAddr": "f904:ece9:ba63:6ac7:8018:1e5:7130:0", "DstMac": "0A:58:0A:80:00:37", "DstPort": 9999, "Duplicate": false, "Etype": 2048, "Flags": 16, "FlowDirection": 0, "IfDirection": 0, "Interface": "ens5", "K8S_FlowLayer": "infra", "Packets": 1, "Proto": 6, "SrcAddr": "3e06:6c10:6440:2:a80:37:b756:270f", "SrcMac": "0A:58:0A:80:00:01", "SrcPort": 46934, "TimeFlowEndMs": 1709741962111, "TimeFlowRttNs": 121000, "TimeFlowStartMs": 1709741962111, "TimeReceived": 1709741964 } You can use SQLite to inspect the ./output/flow/<capture_date_time>.db database file. For example: Open the file by running the following command: USD sqlite3 ./output/flow/<capture_date_time>.db Query the data by running a SQLite SELECT statement, for example: sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10; Example output 12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1 12.2.2. Capturing packets You can capture packets using the Network Observability CLI. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Run the packet capture with filters enabled: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Add filters to the live table filter prompt in the terminal to refine the incoming packets. An example filter is as follows: live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once Use the PageUp and PageDown keys to toggle between None , Resource , Zone , Host , Owner and all of the above . To stop capturing, press Ctrl + C . View the captured data, which is written to a single file in an ./output/pcap directory located in the same path that was used to install the CLI: The ./output/pcap/<capture_date_time>.pcap file can be opened with Wireshark. 12.2.3. Capturing metrics You can generate on-demand dashboards in Prometheus by using a service monitor for Network Observability. Prerequisites Install the OpenShift CLI ( oc ). Install the Network Observability CLI ( oc netobserv ) plugin. Procedure Capture metrics with filters enabled by running the following command: Example output USD oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051 Open the link provided in the terminal to view the NetObserv / On-Demand dashboard: Example URL https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli Note Features that are not enabled present as empty graphs. 12.2.4. Cleaning the Network Observability CLI You can manually clean the CLI workload by running oc netobserv cleanup . This command removes all the CLI components from your cluster. When you end a capture, this command is run automatically by the client. You might be required to manually run it if you experience connectivity issues. Procedure Run the following command: USD oc netobserv cleanup Additional resources Network Observability CLI reference 12.3. Network Observability CLI (oc netobserv) reference The Network Observability CLI ( oc netobserv ) has most features and filtering options that are available for the Network Observability Operator. You can pass command line arguments to enable features or filtering options. 12.3.1. Network Observability CLI usage You can use the Network Observability CLI ( oc netobserv ) to pass command line arguments to capture flows data, packets data, and metrics for further analysis and enable features supported by the Network Observability Operator. 12.3.1.1. Syntax The basic syntax for oc netobserv commands: oc netobserv syntax USD oc netobserv [<command>] [<feature_option>] [<command_options>] 1 1 1 Feature options can only be used with the oc netobserv flows command. They cannot be used with the oc netobserv packets command. 12.3.1.2. Basic commands Table 12.1. Basic commands Command Description flows Capture flows information. For subcommands, see the "Flows capture options" table. packets Capture packets data. For subcommands, see the "Packets capture options" table. metrics Capture metrics data. For subcommands, see the "Metrics capture options" table. follow Follow collector logs when running in background. stop Stop collection by removing agent daemonset. copy Copy collector generated files locally. cleanup Remove the Network Observability CLI components. version Print the software version. help Show help. 12.3.1.3. Flows capture options Flows capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering. oc netobserv flows syntax USD oc netobserv flows [<feature_option>] [<command_options>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled: USD oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.4. Packets capture options You can filter packets capture data the as same as flows capture by using the filters. Certain features, such as packets drop, DNS, RTT, and network events, are only available for flows and metrics capture. oc netobserv packets syntax USD oc netobserv packets [<option>] Option Description Default --background run in background false --copy copy the output files locally prompt --log-level components logs info --max-time maximum capture time 5m --max-bytes maximum capture bytes 50000000 = 50MB --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - Example running packets capture on TCP protocol and port 49051: USD oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 12.3.1.5. Metrics capture options You can enable features and use filters on metrics capture, the same as flows capture. The generated graphs fill accordingly in the dashboard. oc netobserv metrics syntax USD oc netobserv metrics [<option>] Option Description Default --enable_all enable all eBPF features false --enable_dns enable DNS tracking false --enable_network_events enable network events monitoring false --enable_pkt_translation enable packet translation false --enable_pkt_drop enable packet drop false --enable_rtt enable RTT tracking false --enable_udn_mapping enable User Defined Network mapping false --get-subnets get subnets information false --action filter action Accept --cidr filter CIDR 0.0.0.0/0 --direction filter direction - --dport filter destination port - --dport_range filter destination port range - --dports filter on either of two destination ports - --drops filter flows with only dropped packets false --icmp_code filter ICMP code - --icmp_type filter ICMP type - --node-selector capture on specific nodes - --peer_ip filter peer IP - --peer_cidr filter peer CIDR - --port_range filter port range - --port filter port - --ports filter on either of two ports - --protocol filter protocol - --regexes filter flows using regular expression - --sport_range filter source port range - --sport filter source port - --sports filter on either of two source ports - --tcp_flags filter TCP flags - --interfaces interfaces to monitor - Example running metrics capture for TCP drops USD oc netobserv metrics --enable_pkt_drop --protocol=TCP
[ "curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64", "chmod +x ./oc-netobserv-amd64", "sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv", "oc netobserv version", "Netobserv CLI version <version>", "oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }", "sqlite3 ./output/flow/<capture_date_time>.db", "sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;", "12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once", "oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli", "oc netobserv cleanup", "oc netobserv [<command>] [<feature_option>] [<command_options>] 1", "oc netobserv flows [<feature_option>] [<command_options>]", "oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv packets [<option>]", "oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051", "oc netobserv metrics [<option>]", "oc netobserv metrics --enable_pkt_drop --protocol=TCP" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_observability/network-observability-cli-1
Chapter 5. File Systems
Chapter 5. File Systems Support of Btrfs File System The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7.1. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, it enables compression and integrated device management. OverlayFS The OverlayFS file system service allows the user to "overlay" one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This can be useful because it allows multiple users to share a file-system image, for example containers, or when the base image is on read-only media, for example a DVD-ROM. In Red Hat Enterprise Linux 7.1, OverlayFS is supported as a Technology Preview. There are currently two restrictions: It is recommended to use ext4 as the lower file system; the use of xfs and gfs2 file systems is not supported. SELinux is not supported, and to use OverlayFS, it is required to disable enforcing mode. Support of Parallel NFS Parallel NFS (pNFS) is a part of the NFS v4.1 standard that allows clients to access storage devices directly and in parallel. The pNFS architecture can improve the scalability and performance of NFS servers for several common workloads. pNFS defines three different storage protocols or layouts: files, objects, and blocks. The client supports the files layout, and since Red Hat Enterprise Linux 7.1, the blocks and object layouts are fully supported. Red Hat continues to work with partners and open source projects to qualify new pNFS layout types and to provide full support for more layout types in the future. For more information on pNFS, refer to http://www.pnfs.com/ .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-file_systems
Chapter 3. Getting information about your clusters and data
Chapter 3. Getting information about your clusters and data In the Cluster information page of cost management, you can view information like the status of your integration, the time of data retrieval, and links to each integration. You can also pause and resume integrations. 3.1. Getting information about your cluster Navigate to cost management > OpenShift . In the Group by drop-down, select Cluster . Select the cluster that you want to view. An OpenShift cluster details page opens. At the top of the page, click the hyperlink Cluster information . The Cluster information page provides the following details: Cluster ID The cost management operator version and if any updates are available The Red Hat integration (the integration for your cluster) Your Cloud integration If you are running a cluster on-premise or if you did not add a cloud integration for your cluster, you won't see a value in Cloud integration . 3.2. Getting information about your data Navigate to cost management > OpenShift . In the Group by drop-down, select Cluster . Select the cluster that you want to view. An OpenShift cluster details page opens. At the top of the page, click the hyperlink Data details . There are three sections that give you details about your cloud data, cluster data, and about cost management data: Cloud integration status or Red Hat integration status Provides a link to your integrations. Data availability For Cloud data , a timestamp refers to the last time that cost management checked for an available report. Data retrieval For Cloud data , a timestamp refers to when cost management retrieved your data from the cloud provider. For Cluster data , a timestamp refers to when cost management retrieved your data from the ingress service that the operator uploads it to. Data processing This timestamp refers to when cost management unpacked the reports, put them in the database, and made them available with the API. Data integration and finalization For cost management data , a timestamp refers to when cost management correlated the raw billing data from your cloud with your cluster metrics, and then applied any cost model rates against your metrics. 3.3. Pausing or resuming an integration In the Integrations section of console.redhat.com Settings , choose an integration that you want to pause or resume. On the row of the integration that you want to pause or resume, click the more options menu . Verify that Cloud integration status has either a pause icon or a green checkmark. It can take a couple seconds for it to load.
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/analyzing_your_cost_data/cluster-information
Chapter 8. About the installer inventory file
Chapter 8. About the installer inventory file Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. By using an inventory file, Ansible can manage a large number of hosts with a single command. Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify. The inventory file can be in one of many formats, depending on the inventory plugins that you have. The most common formats are INI and YAML . Inventory files listed in this document are shown in INI format. The location of the inventory file depends on the installer you used. The following table shows possible locations: Installer Location Bundle tar /ansible-automation-platform-setup-bundle-<latest-version> Non-bundle tar /ansible-automation-platform-setup-<latest-version> RPM /opt/ansible-automation-platform/installer You can verify the hosts in your inventory using the command: ansible all -i <path-to-inventory-file. --list-hosts Example inventory file The first part of the inventory file specifies the hosts or groups that Ansible can work with. 8.1. Guidelines for hosts and groups Databases When using an external database, ensure the [database] sections of your inventory file are properly set up. To improve performance, do not colocate the database and the automation controller on the same server. Automation hub If there is an [automationhub] group, you must include the variables automationhub_pg_host and automationhub_pg_port . Add Ansible automation hub information in the [automationhub] group. Do not install Ansible automation hub and automation controller on the same node. Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub and automation controller from a different node. The FQDN must not contain the _ symbol, as it will not be processed correctly in Skopeo. You may use the - symbol, as long as it is not at the start or the end of the host name. Do not use localhost . Private automation hub Do not install private automation hub and automation controller on the same node. You can use the same PostgreSQL (database) instance, but they must use a different (database) name. If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, it can result in an installation you cannot use as a container registry without certificate issues. Important You must separate the installation of automation controller and Ansible automation hub because the [database] group does not distinguish between the two if both are installed at the same time. If you use one value in [database] and both automation controller and Ansible automation hub define it, they would use the same database. Automation controller Automation controller does not configure replication or failover for the database that it uses. automation controller works with any replication that you have. Event-Driven Ansible controller Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller. Clustered installations When upgrading an existing cluster, you can also reconfigure your cluster to omit existing instances or instance groups. Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster. In addition to omitting instances or instance groups from the inventory file, you must also deprovision instances or instance groups before starting the upgrade. For more information, see Deprovisioning nodes or groups . Otherwise, omitted instances or instance groups continue to communicate with the cluster, which can cause issues with automation controller services during the upgrade. If you are creating a clustered installation setup, you must replace [localhost] with the hostname or IP address of all instances. Installers for automation controller and automation hub do not accept [localhost] All nodes and instances must be able to reach any others by using this hostname or address. You cannot use the localhost ansible_connection=local on one of the nodes. Use the same format for the host names of all the nodes. Therefore, this does not work: [automationhub] localhost ansible_connection=local hostA hostB.example.com 172.27.0.4 Instead, use these formats: [automationhub] hostA hostB hostC or [automationhub] hostA.example.com hostB.example.com hostC.example.com 8.2. Deprovisioning nodes or groups You can deprovision nodes and instance groups using the Ansible Automation Platform installer. Running the installer will remove all configuration files and logs attached to the nodes in the group. Note You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group. To deprovision nodes, append node_state=deprovision to the node or group within the inventory file. For example: To remove a single node from a deployment: [automationcontroller] host1.example.com host2.example.com host4.example.com node_state=deprovision or To remove an entire instance group from a deployment: [instance_group_restrictedzone] host4.example.com host5.example.com [instance_group_restrictedzone:vars] node_state=deprovision 8.3. Inventory variables The second part of the example inventory file, following [all:vars] , is a list of variables used by the installer. Using all means the variables apply to all hosts. To apply variables to a particular host, use [hostname:vars] . For example, [automationhub:vars] . 8.4. Rules for declaring variables in inventory files The values of string variables are declared in quotes. For example: pg_database='awx' pg_username='awx' pg_password='<password>' When declared in a :vars section, INI values are interpreted as strings. For example, var=FALSE creates a string equal to FALSE . Unlike host lines, :vars sections accept only a single entry per line, so everything after the = must be the value for the entry. Host lines accept multiple key=value parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator. Values that contain whitespace can be quoted (single or double). For more information, see Python shlex parsing rules . If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables. Note Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly. If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is enclose the value in both single and double quotation marks). For example, to use mypasswordwith#hashsigns as a value for the variable pg_password , declare it as pg_password='"mypasswordwith#hashsigns"' in the Ansible host inventory file. 8.5. Securing secrets in the inventory file You can encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names and the variable values makes it hard to find the source of the values. To circumvent this, you can encrypt the variables individually by using ansible-vault encrypt_string , or encrypt a file containing the variables. Procedure Create a file labeled credentials.yml to store the encrypted credentials. USD cat credentials.yml admin_password: my_long_admin_pw pg_password: my_long_pg_pw registry_password: my_long_registry_pw Encrypt the credentials.yml file using ansible-vault . USD ansible-vault encrypt credentials.yml New Vault password: Confirm New Vault password: Encryption successful Important Store your encrypted vault password in a safe place. Verify that the credentials.yml file is encrypted. USD cat credentials.yml USDANSIBLE_VAULT;1.1; AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763 Run setup.sh for installation of Ansible Automation Platform 2.4 and pass both credentials.yml and the --ask-vault-pass option . USD ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass 8.6. Additional inventory file variables You can further configure your Red Hat Ansible Automation Platform installation by including additional variables in the inventory file. These configurations add optional features for managing your Red Hat Ansible Automation Platform. Add these variables by editing the inventory file using a text editor. A table of predefined values for inventory file variables can be found in Inventory file variables in the Red Hat Ansible Automation Platform Installation Guide .
[ "ansible all -i <path-to-inventory-file. --list-hosts", "[automationcontroller] host1.example.com host2.example.com Host4.example.com [automationhub] host3.example.com [database] Host5.example.com [all:vars] admin_password='<password>' pg_host='' pg_port='' pg_database='awx' pg_username='awx' pg_password='<password>' registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>'", "[automationhub] localhost ansible_connection=local hostA hostB.example.com 172.27.0.4", "[automationhub] hostA hostB hostC", "[automationhub] hostA.example.com hostB.example.com hostC.example.com", "[automationcontroller] host1.example.com host2.example.com host4.example.com node_state=deprovision", "[instance_group_restrictedzone] host4.example.com host5.example.com [instance_group_restrictedzone:vars] node_state=deprovision", "pg_database='awx' pg_username='awx' pg_password='<password>'", "cat credentials.yml admin_password: my_long_admin_pw pg_password: my_long_pg_pw registry_password: my_long_registry_pw", "ansible-vault encrypt credentials.yml New Vault password: Confirm New Vault password: Encryption successful", "cat credentials.yml USDANSIBLE_VAULT;1.1; AES256363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763", "ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_planning_guide/about_the_installer_inventory_file
Part III. Integrating Red Hat Process Automation Manager with Red Hat Single Sign-On
Part III. Integrating Red Hat Process Automation Manager with Red Hat Single Sign-On As a system administrator, you can integrate Red Hat Single Sign-On with Red Hat Process Automation Manager to secure your Red Hat Process Automation Manager browser applications with a single authentication method. Prerequisites Red Hat Process Automation Manager is installed on Red Hat JBoss EAP 7.4. For information, see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/assembly-integrating-sso
Chapter 2. Installing extension pack for Apache Camel by Red Hat
Chapter 2. Installing extension pack for Apache Camel by Red Hat Important The VS Code extensions for Apache Camel are listed as development support. For more information about scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . This section explains how to install Extension Pack for Apache Camel by Red Hat. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel . Select the Extension Pack for Apache Camel by Red Hat option from the search results and then click Install. This installs the extension pack which includes extensions for Apache Camel in the VS Code editor.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/tooling_guide/installing-camel-extension-pack
Federate with Identity Service
Federate with Identity Service Red Hat OpenStack Platform 16.0 Federate with Identity Service using Red Hat Single Sign-On OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/federate_with_identity_service/index
Chapter 40. Authentication and Interoperability
Chapter 40. Authentication and Interoperability bind-dyndb-ldap component, BZ# 1139776 The latest version of the bind-dyndb-ldap system plug-in offers significant improvements over the versions, but currently has some limitations. One of the limitations is missing support for the LDAP rename (MODRDN) operation. As a consequence, DNS records renamed in LDAP are not served correctly. To work around this problem, restart the named daemon to resynchronize data after each MODRDN operation. In an Identity Management (IdM) cluster, restart the named daemon on all IdM replicas. ipa component, BZ# 1187524 The userRoot.ldif and ipaca.ldif files, from which Identity Management (IdM) reimports the back end when restoring from backup, cannot be opened during a full-server restore even though they are present in the tar archive containing the IdM backup. Consequently, these files are skipped during the full-server restore. If you restore from a full-server backup, the restored back end can receive some updates from after the backup was created. This is not expected because all updates received between the time the backup was created and the time the restore is performed should be lost. The server is successfully restored, but can contain invalid data. If the restored server containing invalid data is then used to reinitialize a replica, the replica reinitialization succeeds, but the data on the replica is invalid. No workaround is currently available. It is recommended that you do not use a server restored from a full-server IdM backup to reinitialize a replica, which ensures that no unexpected updates are present at the end of the restore and reinitialization process. Note that this known issue relates only to the full-server IdM restore, not to the data-only IdM restore. ipa (slapi-nis) component, BZ# 1157757 When the Schema Compatibility plug-in is configured to provide Active Directory (AD) users access to legacy clients using the Identity Management (IdM) cross-forest trust to AD, the 389 Directory Server can under certain conditions increase CPU consumption upon receiving a request to resolve complex group membership of an AD user. ipa component, BZ# 1186352 When you restore an Identity Management (IdM) server from backup and re-initalize the restored data to other replicas, the Schema Compatibility plug-in can still maintain a cache of the old data from before performing the restore and re-initialization. Consequently, the replicas might behave unexpectedly. For example, if you attempt to add a user that was originally added after performing the backup, and thus removed during the restore and re-initialization steps, the operation might fail with an error, because the Schema Compatibility cache contains a conflicting user entry. To work around this problem, restart the IdM replicas after re-intializing them from the master server. This clears the Schema Compatibility cache and ensures that the replicas behave as expected in the described situation. ipa component, BZ# 1188195 Both anonymous and authenticated users lose the default permission to read the facsimiletelephonenumber user attribute after upgrading to the Red Hat Enterprise Linux 7.1 version of Identity Management (IdM). To manually change the new default setting and make the attribute readable again, run the following command: ipa component, BZ# 1189034 The ipa host-del --updatedns command does not update the host DNS records if the DNS zone of the host is not fully qualified. Creating unqualified zones was possible in Red Hat Enterprise Linux 7.0 and 6. If you execute ipa host-del --updatedns on an unqualified DNS zone, for example, example.test instead of the fully qualified example.test. with the dot (.) at the end, the command fails with an internal error and deletes the host but not its DNS records. To work around this problem, execute ipa host-del --updatedns command on an IdM server running Red Hat Enterprise Linux 7.0 or 6, where updating the host DNS records works as expected, or update the host DNS records manually after running the command on Red Hat Enterprise Linux 7.1. ipa component, BZ# 1193578 Kerberos libraries on Identity Management (IdM) clients communicate by default over the User Datagram Protocol (UDP). Using a one-time password (OTP) can cause additional delay and breach of Kerberos timeouts. As a consequence, the kinit command and other Kerberos operations can report communication errors, and the user can get locked out. To work around this problem, make communication using the slightly slower Transmission Control Protocol (TCP) default by setting the udp_preference_limit option to 0 in the /etc/krb5.conf file. ipa component, BZ# 1170770 Hosts enrolled to IdM cannot belong to the same DNS domains as the DNS domains belonging to an AD forest. When any of the DNS domains in an Active Directory (AD) forest are marked as belonging to the Identity Management (IdM) realm, cross-forest trust with AD does not work even though the trust status reports success. To work around this problem, use DNS domains separate from an existing AD forest to deploy IdM. If you are already using the same DNS domains for both AD and IdM, first run the ipa realmdomains-show command to display the list of IdM realm domains. Then remove the DNS domains belonging to AD from the list by running the ipa realmdomains-mod --del-domain= wrong.domain command. Un-enroll the hosts from the AD forest DNS domains from IdM, and choose DNS names that are not in conflict with the AD forest DNS domains for these hosts. Finally, refresh the status of the cross-forest trust to the AD forest by reestablishing the trust with the ipa trust-add command. ipa component, BZ# 988473 Access control to Lightweight Directory Access Protocol (LDAP) objects representing trust with Active Directory (AD) is given to the Trusted Admins group in Identity Management (IdM). In order to establish the trust, the IdM administrator should belong to a group which is a member of the Trusted Admins group and this group should have relative identifier (RID) 512 assigned. To ensure this, run the ipa-adtrust-install command and then the ipa group-show admins --all command to verify that the ipantsecurityidentifier field contains a value ending with the -512 string. If the field does not end with -512 , use the ipa group-mod admins --setattr=ipantsecurityidentifier= SID command, where SID is the value of the field from the ipa group-show admins --all command output with the last component value (-XXXX) replaced by the -512 string. sssd component, BZ# 1024744 The OpenLDAP server and the 389 Directory Server (389 DS) treat grace logins differently. 389 DS treats them as the number of grace logins left , while OpenLDAP treats them as the number of grace logins used . Currently, SSSD only handles the semantics used by 389 DS. As a result, when using OpenLDAP, the grace password warning can be incorrect. sssd component, BZ# 1081046 The accountExpires attribute that SSSD uses to see whether an account has expired is not replicated to the global catalog by default. As a result, users with expired accounts can be allowed to log in when using GSSAPI authentication. To work around this problem, the global catalog support can be disabled by specifying ad_enable_gc=False in the sssd.conf file. With this setting, users with expired accounts will be denied access when using GSSAPI authentication. Note that SSSD connects to each LDAP server individually in this scenario, which can increase the connection count. sssd component, BZ# 1103249 Under certain circumstances, the algorithm in the Privilege Attribute Certificate (PAC) responder component of the SSSD service does not effectively handle users who are members of a large number of groups. As a consequence, logging from Windows clients to Red Hat Enterprise Linux clients with Kerberos single sign-on (SSO) can be noticeably slow. There is currently no known workaround available. sssd component, BZ# 1194345 The SSSD service uses the global catalog (GC) for initgroup lookups but the POSIX attributes, such as the user home directory or shell, are not replicated to the GC set by default. Consequently, when SSSD requests the POSIX attributes during SSSD lookups, SSSD incorrectly considers the attributes to be removed from the server, because they are not present in the GC, and removes them from the SSSD cache as well. To work around this problem, either disable the GC support by setting the ad_enable_gc=False parameter in the sssd-ad.conf file, or replicate the POSIX attributes to the GC. Disabling the GC support is easier but results in the client being unable to resolve cross-domain group memberships. Replicating POSIX attributes to the GC is a more systematic solution but requires changing the Active Directory (AD) schema. As a result of either one of the aforementioned workarounds, running the getent passwd user command shows the POSIX attributes. Note that running the id user command might not show the POSIX attributes even if they are set properly. samba component, BZ# 1186403 Binaries in the samba-common.x86_64 and samba-common.i686 packages contain the same file paths but differ in their contents. As a consequence, the packages cannot be installed together, because the RPM database forbids this scenario. To work around this problem, do not install samba-common.i686 if you primarily need samba-common.x86_64 ; neither in a kickstart file, nor on an already installed system. If you need samba-common.i686 , avoid samba-common.x86_64 . As a result, the system can be installed, but with only one architecture of the samba-common package at a time.
[ "ipa permission-mod 'System: Read User Addressbook Attributes' --includedattrs facsimiletelephonenumber" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/known-issues-authentication_and_interoperability
B.4. Extract a Self-signed Certificate from the Keystore
B.4. Extract a Self-signed Certificate from the Keystore Procedure B.2. Extract a Self-signed Certificate from the Keystore Run the keytool -export -alias ALIAS -keystore server.keystore -rfc -file public.cert command: Enter the keystore password when prompted: Result This creates the public.cert file that contains a certificate signed with the private key in the server.keystore .
[ "keytool -export -alias teiid -keystore server.keystore -rfc -file public.cert", "Enter keystore password: <password>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/extract_a_self-signed_certificate_from_the_keystore1
17.4.6. Your Printer Does Not Work
17.4.6. Your Printer Does Not Work If you are not sure how to set up your printer or are having trouble getting it to work properly, try using the Printer Configuration Tool . Type the system-config-printer command at a shell prompt to launch the Printer Configuration Tool . If you are not root, it prompts you for the root password to continue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch17s04s06
Index
Index A advantages Python pretty-printers debugging, Python Pretty-Printers Akonadi KDE Development Framework libraries and runtime support, KDE4 Architecture architecture, KDE4 KDE Development Framework libraries and runtime support, KDE4 Architecture Autotools compiling and building, Autotools B backtrace tools GNU debugger, Simple GDB Boost libraries and runtime support, Boost boost-doc Boost libraries and runtime support, Additional Information breakpoint fundamentals GNU debugger, Simple GDB breakpoints (conditional) GNU debugger, Conditional Breakpoints build-id compiling and building, build-id Unique Identification of Binaries building compiling and building, Compiling and Building C C++ Standard Library, GNU libraries and runtime support, The GNU C++ Standard Library cachegrind tools Valgrind, Valgrind Tools callgrind tools Valgrind, Valgrind Tools Collaborating, Collaborating commands fundamentals GNU debugger, Simple GDB profiling Valgrind, Valgrind Tools tools Performance Counters for Linux (PCL) and perf, Perf Tool Commands commonly-used commands Autotools compiling and building, Autotools compatibility libraries and runtime support, Compatibility compiling a C Hello World program usage GCC, Simple C Usage compiling a C++ Hello World program usage GCC, Simple C++ Usage compiling and building Autotools, Autotools commonly-used commands, Autotools configuration script, Configuration Script documentation, Autotools Documentation plug-in for Eclipse, Autotools Plug-in for Eclipse templates (supported), Autotools Plug-in for Eclipse build-id, build-id Unique Identification of Binaries GNU Compiler Collection, GNU Compiler Collection (GCC) documentation, GCC Documentation required packages, Running GCC usage, Running GCC introduction, Compiling and Building conditional breakpoints GNU debugger, Conditional Breakpoints configuration script Autotools compiling and building, Configuration Script continue tools GNU debugger, Simple GDB D debugfs file system profiling ftrace, ftrace debugging debuginfo-packages, Installing Debuginfo Packages installation, Installing Debuginfo Packages GNU debugger, GDB fundamental mechanisms, GDB GDB, GDB requirements, GDB introduction, Debugging Python pretty-printers, Python Pretty-Printers advantages, Python Pretty-Printers debugging output (formatted), Python Pretty-Printers documentation, Python Pretty-Printers pretty-printers, Python Pretty-Printers variable tracking at assignments (VTA), Variable Tracking at Assignments debugging a Hello World program usage GNU debugger, Running GDB debugging output (formatted) Python pretty-printers debugging, Python Pretty-Printers debuginfo-packages debugging, Installing Debuginfo Packages documentation Autotools compiling and building, Autotools Documentation Boost libraries and runtime support, Additional Information GNU C++ Standard Library libraries and runtime support, Additional information GNU Compiler Collection compiling and building, GCC Documentation Java libraries and runtime support, Java Documentation KDE Development Framework libraries and runtime support, kdelibs Documentation OProfile profiling, OProfile Documentation Perl libraries and runtime support, Perl Documentation profiling ftrace, ftrace Documentation Python libraries and runtime support, Python Documentation Python pretty-printers debugging, Python Pretty-Printers Qt libraries and runtime support, Qt Library Documentation Ruby libraries and runtime support, Ruby Documentation SystemTap profiling, Additional Information Valgrind profiling, Additional information Documentation Doxygen, Doxygen Docment sources, Documenting the Sources Getting Started, Getting Started Resources, Resources Running Doxygen, Running Doxygen Supported output and languages, Doxygen Supported Output and Languages Documentation Tools, Documentation Tools Doxygen Documentation, Doxygen document sources, Documenting the Sources Getting Started, Getting Started Resources, Resources Running Doxygen, Running Doxygen Supported output and languages, Doxygen Supported Output and Languages E execution (forked) GNU debugger, Forked Execution F finish tools GNU debugger, Simple GDB forked execution GNU debugger, Forked Execution formatted debugging output Python pretty-printers debugging, Python Pretty-Printers framework (ftrace) profiling ftrace, ftrace ftrace profiling, ftrace debugfs file system, ftrace documentation, ftrace Documentation framework (ftrace), ftrace usage, Using ftrace function tracer profiling ftrace, ftrace fundamental commands fundamentals GNU debugger, Simple GDB fundamental mechanisms GNU debugger debugging, GDB fundamentals GNU debugger, Simple GDB G gcc GNU Compiler Collection compiling and building, GNU Compiler Collection (GCC) GCC C usage compiling a C Hello World program, Simple C Usage GCC C++ usage compiling a C++ Hello World program, Simple C++ Usage GDB GNU debugger debugging, GDB Git configuration, Installing and Configuring Git documentation, Additional Resources installation, Installing and Configuring Git overview, Git usage, Creating a New Repository GNOME Power Manager libraries and runtime support, GNOME Power Manager gnome-power-manager GNOME Power Manager libraries and runtime support, GNOME Power Manager GNU C++ Standard Library libraries and runtime support, The GNU C++ Standard Library GNU Compiler Collection compiling and building, GNU Compiler Collection (GCC) GNU debugger conditional breakpoints, Conditional Breakpoints debugging, GDB execution (forked), Forked Execution forked execution, Forked Execution fundamentals, Simple GDB breakpoint, Simple GDB commands, Simple GDB halting an executable, Simple GDB inspecting the state of an executable, Simple GDB starting an executable, Simple GDB interfaces (CLI and machine), Alternative User Interfaces for GDB thread and threaded debugging, Debugging Individual Threads tools, Simple GDB backtrace, Simple GDB continue, Simple GDB finish, Simple GDB help, Simple GDB list, Simple GDB , Simple GDB print, Simple GDB quit, Simple GDB step, Simple GDB usage, Running GDB debugging a Hello World program, Running GDB variations and environments, Alternative User Interfaces for GDB H halting an executable fundamentals GNU debugger, Simple GDB helgrind tools Valgrind, Valgrind Tools help tools GNU debugger, Simple GDB I inspecting the state of an executable fundamentals GNU debugger, Simple GDB installation debuginfo-packages debugging, Installing Debuginfo Packages interfaces (CLI and machine) GNU debugger, Alternative User Interfaces for GDB introduction compiling and building, Compiling and Building debugging, Debugging libraries and runtime support, Libraries and Runtime Support profiling, Profiling SystemTap, SystemTap ISO 14482 Standard C++ library GNU C++ Standard Library libraries and runtime support, The GNU C++ Standard Library J Java libraries and runtime support, Java K KDE Development Framework libraries and runtime support, KDE Development Framework KDE4 architecture KDE Development Framework libraries and runtime support, KDE4 Architecture kdelibs-devel KDE Development Framework libraries and runtime support, KDE Development Framework kernel information packages profiling SystemTap, SystemTap KHTML KDE Development Framework libraries and runtime support, KDE4 Architecture KIO KDE Development Framework libraries and runtime support, KDE4 Architecture KJS KDE Development Framework libraries and runtime support, KDE4 Architecture KNewStuff2 KDE Development Framework libraries and runtime support, KDE4 Architecture KXMLGUI KDE Development Framework libraries and runtime support, KDE4 Architecture L libraries runtime support, Libraries and Runtime Support libraries and runtime support Boost, Boost boost-doc, Additional Information documentation, Additional Information message passing interface (MPI), Boost meta-package, Boost C++ Standard Library, GNU, The GNU C++ Standard Library compatibility, Compatibility GNOME Power Manager, GNOME Power Manager gnome-power-manager, GNOME Power Manager GNU C++ Standard Library, The GNU C++ Standard Library documentation, Additional information ISO 14482 Standard C++ library, The GNU C++ Standard Library libstdc++-devel, The GNU C++ Standard Library libstdc++-docs, Additional information Standard Template Library, The GNU C++ Standard Library introduction, Libraries and Runtime Support Java, Java documentation, Java Documentation KDE Development Framework, KDE Development Framework Akonadi, KDE4 Architecture documentation, kdelibs Documentation KDE4 architecture, KDE4 Architecture kdelibs-devel, KDE Development Framework KHTML, KDE4 Architecture KIO, KDE4 Architecture KJS, KDE4 Architecture KNewStuff2, KDE4 Architecture KXMLGUI, KDE4 Architecture Phonon, KDE4 Architecture Plasma, KDE4 Architecture Solid, KDE4 Architecture Sonnet, KDE4 Architecture Strigi, KDE4 Architecture Telepathy, KDE4 Architecture libstdc++, The GNU C++ Standard Library Perl, Perl documentation, Perl Documentation module installation, Installation updates, Perl Updates Python, Python documentation, Python Documentation updates, Python Updates Qt, Qt documentation, Qt Library Documentation meta object compiler (MOC), Qt Qt Creator, Qt Creator qt-doc, Qt Library Documentation updates, Qt Updates widget toolkit, Qt Ruby, Ruby documentation, Ruby Documentation ruby-devel, Ruby Library and Runtime Details NSS Shared Databases, NSS Shared Databases Backwards Compatibility, Backwards Compatibility Documentation, NSS Shared Databases Documentation libstdc++ libraries and runtime support, The GNU C++ Standard Library libstdc++-devel GNU C++ Standard Library libraries and runtime support, The GNU C++ Standard Library libstdc++-docs GNU C++ Standard Library libraries and runtime support, Additional information list tools GNU debugger, Simple GDB Performance Counters for Linux (PCL) and perf, Perf Tool Commands M machine interface GNU debugger, Alternative User Interfaces for GDB mallopt, mallopt massif tools Valgrind, Valgrind Tools mechanisms GNU debugger debugging, GDB memcheck tools Valgrind, Valgrind Tools message passing interface (MPI) Boost libraries and runtime support, Boost meta object compiler (MOC) Qt libraries and runtime support, Qt meta-package Boost libraries and runtime support, Boost module installation Perl libraries and runtime support, Installation N tools GNU debugger, Simple GDB NSS Shared Datagbases Library and Runtime Details, NSS Shared Databases Backwards Compatibility, Backwards Compatibility Documentation, NSS Shared Databases Documentation O OProfile profiling, OProfile documentation, OProfile Documentation usage, Using OProfile P perf profiling Performance Counters for Linux (PCL) and perf, Performance Counters for Linux (PCL) Tools and perf usage Performance Counters for Linux (PCL) and perf, Using Perf Performance Counters for Linux (PCL) and perf profiling, Performance Counters for Linux (PCL) Tools and perf subsystem (PCL), Performance Counters for Linux (PCL) Tools and perf tools, Perf Tool Commands commands, Perf Tool Commands list, Perf Tool Commands record, Perf Tool Commands report, Perf Tool Commands stat, Perf Tool Commands usage, Using Perf perf, Using Perf Perl libraries and runtime support, Perl Phonon KDE Development Framework libraries and runtime support, KDE4 Architecture Plasma KDE Development Framework libraries and runtime support, KDE4 Architecture plug-in for Eclipse Autotools compiling and building, Autotools Plug-in for Eclipse pretty-printers Python pretty-printers debugging, Python Pretty-Printers print tools GNU debugger, Simple GDB profiling conflict between perf and oprofile, Using Perf ftrace, ftrace introduction, Profiling OProfile, OProfile Performance Counters for Linux (PCL) and perf, Performance Counters for Linux (PCL) Tools and perf SystemTap, SystemTap Valgrind, Valgrind Python libraries and runtime support, Python Python pretty-printers debugging, Python Pretty-Printers Q Qt libraries and runtime support, Qt Qt Creator Qt libraries and runtime support, Qt Creator qt-doc Qt libraries and runtime support, Qt Library Documentation quit tools GNU debugger, Simple GDB R record tools Performance Counters for Linux (PCL) and perf, Perf Tool Commands report tools Performance Counters for Linux (PCL) and perf, Perf Tool Commands required packages GNU Compiler Collection compiling and building, Running GCC profiling SystemTap, SystemTap requirements GNU debugger debugging, GDB Revision control, Collaborating Ruby libraries and runtime support, Ruby ruby-devel Ruby libraries and runtime support, Ruby runtime support libraries, Libraries and Runtime Support S scripts (SystemTap scripts) profiling SystemTap, SystemTap Solid KDE Development Framework libraries and runtime support, KDE4 Architecture Sonnet KDE Development Framework libraries and runtime support, KDE4 Architecture Standard Template Library GNU C++ Standard Library libraries and runtime support, The GNU C++ Standard Library starting an executable fundamentals GNU debugger, Simple GDB stat tools Performance Counters for Linux (PCL) and perf, Perf Tool Commands step tools GNU debugger, Simple GDB Strigi KDE Development Framework libraries and runtime support, KDE4 Architecture subsystem (PCL) profiling Performance Counters for Linux (PCL) and perf, Performance Counters for Linux (PCL) Tools and perf supported templates Autotools compiling and building, Autotools Plug-in for Eclipse SystemTap profiling, SystemTap documentation, Additional Information introduction, SystemTap kernel information packages, SystemTap required packages, SystemTap scripts (SystemTap scripts), SystemTap T Telepathy KDE Development Framework libraries and runtime support, KDE4 Architecture templates (supported) Autotools compiling and building, Autotools Plug-in for Eclipse thread and threaded debugging GNU debugger, Debugging Individual Threads tools GNU debugger, Simple GDB Performance Counters for Linux (PCL) and perf, Perf Tool Commands profiling Valgrind, Valgrind Tools Valgrind, Valgrind Tools U updates Perl libraries and runtime support, Perl Updates Python libraries and runtime support, Python Updates Qt libraries and runtime support, Qt Updates usage GNU Compiler Collection compiling and building, Running GCC GNU debugger, Running GDB fundamentals, Simple GDB Performance Counters for Linux (PCL) and perf, Using Perf profiling ftrace, Using ftrace OProfile, Using OProfile Valgrind profiling, Using Valgrind V Valgrind profiling, Valgrind commands, Valgrind Tools documentation, Additional information tools, Valgrind Tools usage, Using Valgrind tools cachegrind, Valgrind Tools callgrind, Valgrind Tools helgrind, Valgrind Tools massif, Valgrind Tools memcheck, Valgrind Tools variable tracking at assignments (VTA) debugging, Variable Tracking at Assignments variations and environments GNU debugger, Alternative User Interfaces for GDB Version control, Collaborating W widget toolkit Qt libraries and runtime support, Qt
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/ix01
4.12.2. Exim
4.12.2. Exim Exim has been removed from Red Hat Enterprise Linux 6. Postfix is the default and recommended MTA.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-networking-mail-exim
Chapter 27. OpenShift SDN network plugin
Chapter 27. OpenShift SDN network plugin 27.1. About the OpenShift SDN network plugin Part of Red Hat OpenShift Networking, OpenShift SDN is a network plugin that uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 27.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.15. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 27.1.2. Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin, OpenShift SDN and OVN-Kubernetes, for the network plugin. The following table summarizes the current feature support for both network plugins: Table 27.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not Supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. IPv4/IPv6 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), IBM Power(R), IBM Z(R), and RHOSP platforms. IPv6/IPv4 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), and IBM Power(R) platforms. 27.2. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a project. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 27.2.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. An egress IP address is implemented as an additional IP address on the primary network interface of a node and must be in the same subnet as the primary IP address of the node. The additional IP address must not be assigned to any other node in the cluster. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 27.2.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes IBM Z(R) and IBM(R) LinuxONE Yes IBM Z(R) and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM Yes IBM Power(R) Yes Nutanix Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ). 27.2.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.15. Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 27.2.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 27.2.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 27.2.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 27.2.1.3. Limitations The following limitations apply when using egress IP addresses with the OpenShift SDN network plugin: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. If you need to share IP addresses across namespaces, the OVN-Kubernetes network plugin egress IP address implementation allows you to span IP addresses across multiple namespaces. Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 27.2.1.4. IP address assignment approaches You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP address is associated with a project, OpenShift SDN allows you to assign egress IP addresses to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP address is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. 27.2.1.4.1. Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 27.2.1.4.2. Considerations when using manually assigned egress IP addresses This approach allows you to control which nodes can host an egress IP address. Note If your cluster is installed on public cloud infrastructure, you must ensure that each node that you assign egress IP addresses to has sufficient spare capacity to host the IP addresses. For more information, see "Platform considerations" in a section. When using the manual assignment approach for egress IP addresses the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. Multiple egress IP addresses per namespace are supported. If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply: If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node. If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random. 27.2.2. Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressCIDRs": [ "<ip_address_range>", "<ip_address_range>" ] }' where: <node_name> Specifies a node name. <ip_address_range> Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array. For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 27.2.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Manually assign the egress IP address to the node hosts. If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>", "<ip_address>" ] }' where: <node_name> Specifies a node name. <ip_address> Specifies an IP address. You can specify more than one IP address for the egressIPs array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. 27.2.4. Additional resources If you are configuring manual egress IP address assignment, see Platform considerations for information about IP capacity planning. 27.3. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 27.3.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 27.3.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. Important The creation of more than one EgressNetworkPolicy object is allowed, however it should not be done. When you create more than one EgressNetworkPolicy object, the following message is returned: dropping all rules . In actuality, all external traffic is dropped, which can cause security risks for your organization. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN network plugin in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. If you create a selectorless service and manually define endpoints or EndpointSlices that point to external IPs, traffic to the service IP might still be allowed, even if your EgressNetworkPolicy is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 27.3.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 27.3.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds. The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. 27.3.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy. 2 A collection of one or more egress network policy rules as described in the following section. 27.3.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 27.3.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 27.3.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 27.4. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 27.4.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 27.5. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 27.5.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 27.6. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 27.6.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 27.7. Considerations for the use of an egress router pod 27.7.1. About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 27.7.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. In DNS proxy mode , an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved, source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 27.7.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. the egress router pod executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 27.7.1.3. Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transits Promiscuous Mode Operation 27.7.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command. apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1 , because only one pod can use a given egress source IP address at any time. This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 27.7.2. Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 27.8. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 27.8.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 27.8.2. Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . <ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 27.8.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 27.8.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 27.9. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 27.9.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 27.9.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 27.9.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 27.9.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 27.10. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 27.10.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 27.10.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 27.10.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 27.10.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 27.11. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 27.11.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: You can put blank lines and comments into this file. Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 27.11.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 27.12. Enabling multicast for a project Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. 27.12.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN network plugin, you can enable multicast on a per-project basis. When using the OpenShift SDN network plugin in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plugin in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 27.12.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 27.13. Disabling multicast for a project 27.13.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 27.14. Configuring network isolation using OpenShift SDN Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN network plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 27.14.1. Prerequisites You must have a cluster configured to use the OpenShift SDN network plugin in multitenant isolation mode. 27.14.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 27.14.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 27.14.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 27.15. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 27.15.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 27.15.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 27.2. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 27.15.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully.
[ "IP capacity = public cloud default capacity - sum(current IP assignments)", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressnetworkpolicy.network.openshift.io/v1 created", "oc get egressnetworkpolicy --all-namespaces", "oc describe egressnetworkpolicy <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressnetworkpolicy", "oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressnetworkpolicy", "oc delete -n <project> egressnetworkpolicy <name>", "curl <router_service_IP> <port>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27", "curl <router_service_IP> <port>", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-", "!*.example.com !192.168.1.0/24 192.168.2.1 *", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"", "80 172.16.12.11 100 example.com", "8080 192.168.60.252 80 8443 web.example.com 443", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy", "oc create -f egress-router-service.yaml", "Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27", "oc delete configmap egress-routes --ignore-not-found", "oc create configmap egress-routes --from-file=destination=my-egress-destination.txt", "apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27", "env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination", "oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-", "oc adm pod-network join-projects --to=<project1> <project2> <project3>", "oc get netnamespaces", "oc adm pod-network isolate-projects <project1> <project2>", "oc adm pod-network make-projects-global <project1> <project2>", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]", "oc get networks.operator.openshift.io -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List", "oc get clusteroperator network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/openshift-sdn-network-plugin
Chapter 24. Scheduler [config.openshift.io/v1]
Chapter 24. Scheduler [config.openshift.io/v1] Description Scheduler holds cluster-wide config information to run the Kubernetes Scheduler and influence its placement decisions. The canonical name for this config is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 24.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description defaultNodeSelector string defaultNodeSelector helps set the cluster-wide default node selector to restrict pod placement to specific nodes. This is applied to the pods created in all namespaces and creates an intersection with any existing nodeSelectors already set on a pod, additionally constraining that pod's selector. For example, defaultNodeSelector: "type=user-node,region=east" would set nodeSelector field in pod spec to "type=user-node,region=east" to all pods created in all namespaces. Namespaces having project-wide node selectors won't be impacted even if this field is set. This adds an annotation section to the namespace. For example, if a new namespace is created with node-selector='type=user-node,region=east', the annotation openshift.io/node-selector: type=user-node,region=east gets added to the project. When the openshift.io/node-selector annotation is set on the project the value is used in preference to the value we are setting for defaultNodeSelector field. For instance, openshift.io/node-selector: "type=user-node,region=west" means that the default of "type=user-node,region=east" set in defaultNodeSelector would not be applied. mastersSchedulable boolean MastersSchedulable allows masters nodes to be schedulable. When this flag is turned on, all the master nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the master nodes are schedulable. Important Note: Once the workload pods start running on the master nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. policy object DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. profile string profile sets which scheduling profile should be set in order to configure scheduling decisions for new pods. Valid values are "LowNodeUtilization", "HighNodeUtilization", "NoScoring" Defaults to "LowNodeUtilization" 24.1.2. .spec.policy Description DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 24.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 24.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/schedulers DELETE : delete collection of Scheduler GET : list objects of kind Scheduler POST : create a Scheduler /apis/config.openshift.io/v1/schedulers/{name} DELETE : delete a Scheduler GET : read the specified Scheduler PATCH : partially update the specified Scheduler PUT : replace the specified Scheduler /apis/config.openshift.io/v1/schedulers/{name}/status GET : read status of the specified Scheduler PATCH : partially update status of the specified Scheduler PUT : replace status of the specified Scheduler 24.2.1. /apis/config.openshift.io/v1/schedulers HTTP method DELETE Description delete collection of Scheduler Table 24.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Scheduler Table 24.2. HTTP responses HTTP code Reponse body 200 - OK SchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a Scheduler Table 24.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.4. Body parameters Parameter Type Description body Scheduler schema Table 24.5. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 202 - Accepted Scheduler schema 401 - Unauthorized Empty 24.2.2. /apis/config.openshift.io/v1/schedulers/{name} Table 24.6. Global path parameters Parameter Type Description name string name of the Scheduler HTTP method DELETE Description delete a Scheduler Table 24.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 24.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Scheduler Table 24.9. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Scheduler Table 24.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.11. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Scheduler Table 24.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.13. Body parameters Parameter Type Description body Scheduler schema Table 24.14. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty 24.2.3. /apis/config.openshift.io/v1/schedulers/{name}/status Table 24.15. Global path parameters Parameter Type Description name string name of the Scheduler HTTP method GET Description read status of the specified Scheduler Table 24.16. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Scheduler Table 24.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.18. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Scheduler Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body Scheduler schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/scheduler-config-openshift-io-v1
probe::socket.send
probe::socket.send Name probe::socket.send - Message sent on a socket. Synopsis socket.send Values flags Socket flags value size Size of message sent (in bytes) or error code if success = 0 type Socket type value protocol Protocol value name Name of this probe family Protocol family value success Was send successful? (1 = yes, 0 = no) state Socket state value Context The message sender
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-send
Part I. Plan
Part I. Plan
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/plan
Chapter 1. Preparing your environment for installation
Chapter 1. Preparing your environment for installation Before you install Satellite, ensure that your environment meets the following requirements. 1.1. System requirements The following requirements apply to the networked base operating system: x86_64 architecture The latest version of Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 8 4-core 2.0 GHz CPU at a minimum A minimum of 20 GB RAM is required for Satellite Server to function. In addition, a minimum of 4 GB RAM of swap space is also recommended. Satellite running with less RAM than the minimum value might not operate correctly. A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-) A current Red Hat Satellite subscription Administrative user (root) access Full forward and reverse DNS resolution using a fully-qualified domain name Satellite only supports UTF-8 encoding. If your territory is USA and your language is English, set en_US.utf-8 as the system-wide locale settings. For more information about configuring system locale in Red Hat Enterprise Linux, see Configuring the system locale in Red Hat Enterprise Linux 9 Configuring basic system settings . Your Satellite must have the Red Hat Satellite Infrastructure Subscription manifest in your Customer Portal. Satellite must have satellite-capsule-6.x repository enabled and synced. To create, manage, and export a Red Hat Subscription Manifest in the Customer Portal, see Creating and managing manifests for a connected Satellite Server in Subscription Central . Satellite Server and Capsule Server do not support shortnames in the hostnames. When using custom certificates, the Common Name (CN) of the custom certificate must be a fully qualified domain name (FQDN) instead of a shortname. This does not apply to the clients of a Satellite. Before you install Satellite Server, ensure that your environment meets the requirements for installation. Satellite Server must be installed on a freshly provisioned system that serves no other function except to run Satellite Server. The freshly provisioned system must not have the following users provided by external identity providers to avoid conflicts with the local users that Satellite Server creates: apache foreman foreman-proxy postgres pulp puppet redis tomcat Certified hypervisors Satellite Server is fully supported on both physical systems and virtual machines that run on hypervisors that are supported to run Red Hat Enterprise Linux. For more information about certified hypervisors, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, Red Hat OpenShift Virtualization and Red Hat Enterprise Linux with KVM . SELinux mode SELinux must be enabled, either in enforcing or permissive mode. Installation with disabled SELinux is not supported. Synchronized system clock The system clock on the base operating system where you are installing your Satellite Server must be synchronized across the network. If the system clock is not synchronized, SSL certificate verification might fail. For example, you can use the Chrony suite for timekeeping. For more information, see the following documents: Configuring time synchronization in Red Hat Enterprise Linux 9 Configuring basic system settings Configuring time synchronization in Red Hat Enterprise Linux 8 Configuring basic system settings FIPS mode You can install Satellite on a Red Hat Enterprise Linux system that is operating in FIPS mode. You cannot enable FIPS mode after the installation of Satellite. For more information, see Switching RHEL to FIPS mode in Red Hat Enterprise Linux 9 Security hardening or Switching RHEL to FIPS mode in Red Hat Enterprise Linux 8 Security hardening . Note Satellite supports DEFAULT and FIPS crypto-policies. The FUTURE crypto-policy is not supported for Satellite and Capsule installations. The FUTURE policy is a stricter forward-looking security level intended for testing a possible future policy. For more information, see Using system-wide cryptographic policies in Red Hat Enterprise Linux 9 Security hardening . Inter-Satellite Synchronization (ISS) In a scenario with air-gapped Satellite Servers, all your Satellite Servers must be on the same Satellite version for ISS Export Sync to work. ISS Network Sync works across all Satellite versions that support it. For more information, see Synchronizing Content Between Satellite Servers in Managing content . 1.2. Storage requirements The following table details storage requirements for specific directories. These values are based on expected use case scenarios and can vary according to individual environments. The runtime size was measured with Red Hat Enterprise Linux 7, 8, and 9 repositories synchronized. Table 1.1. Storage requirements for a Satellite Server installation Directory Installation Size Runtime Size /var/log 10 MB 10 GB /var/lib/pgsql 100 MB 20 GB /usr 10 GB Not Applicable /opt/puppetlabs 500 MB Not Applicable /var/lib/pulp 1 MB 300 GB For external database servers: /var/lib/pgsql with installation size of 100 MB and runtime size of 20 GB. For detailed information on partitioning and size, see Disk partitions in Red Hat Enterprise Linux 9 Managing storage devices . 1.3. Storage guidelines Consider the following guidelines when installing Satellite Server to increase efficiency. If you mount the /tmp directory as a separate file system, you must use the exec mount option in the /etc/fstab file. If /tmp is already mounted with the noexec option, you must change the option to exec and re-mount the file system. This is a requirement for the puppetserver service to work. Because most Satellite Server data is stored in the /var directory, mounting /var on LVM storage can help the system to scale. Use high-bandwidth, low-latency storage for the /var/lib/pulp/ and PostgreSQL /var/lib/pgsql directories. As Red Hat Satellite has many operations that are I/O intensive, using high latency, low-bandwidth storage causes performance degradation. You can use the storage-benchmark script to get this data. For more information on using the storage-benchmark script, see Impact of Disk Speed on Satellite Operations . File system guidelines Do not use the GFS2 file system as the input-output latency is too high. Log file storage Log files are written to /var/log/messages/, /var/log/httpd/ , and /var/lib/foreman-proxy/openscap/content/ . You can manage the size of these files using logrotate . For more information, see How to use logrotate utility to rotate log files . The exact amount of storage you require for log messages depends on your installation and setup. SELinux considerations for NFS mount When the /var/lib/pulp directory is mounted using an NFS share, SELinux blocks the synchronization process. To avoid this, specify the SELinux context of the /var/lib/pulp directory in the file system table by adding the following lines to /etc/fstab : If NFS share is already mounted, remount it using the above configuration and enter the following command: Duplicated packages Packages that are duplicated in different repositories are only stored once on the disk. Additional repositories containing duplicate packages require less additional storage. The bulk of storage resides in the /var/lib/pulp/ directory. These end points are not manually configurable. Ensure that storage is available on the /var file system to prevent storage problems. Symbolic links You cannot use symbolic links for /var/lib/pulp/ . Synchronized RHEL ISO If you plan to synchronize RHEL content ISOs to Satellite, note that all minor versions of Red Hat Enterprise Linux also synchronize. You must plan to have adequate storage on your Satellite to manage this. 1.4. Supported operating systems You can install the operating system from a disc, local ISO image, Kickstart, or any other method that Red Hat supports. Red Hat Satellite Server is supported on the latest versions of Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 that are available at the time when Satellite Server is installed. versions of Red Hat Enterprise Linux including EUS or z-stream are not supported. The following operating systems are supported by the installer, have packages, and are tested for deploying Satellite: Table 1.2. Operating systems supported by satellite-installer Operating System Architecture Notes Red Hat Enterprise Linux 9 x86_64 only Red Hat Enterprise Linux 8 x86_64 only Red Hat advises against using an existing system because the Satellite installer will affect the configuration of several components. Red Hat Satellite Server requires a Red Hat Enterprise Linux installation with the @Base package group with no other package-set modifications, and without third-party configurations or software not directly necessary for the direct operation of the server. This restriction includes hardening and other non-Red Hat security software. If you require such software in your infrastructure, install and verify a complete working Satellite Server first, then create a backup of the system before adding any non-Red Hat software. Red Hat does not support using the system for anything other than running Satellite Server. 1.5. Supported browsers Satellite supports recent versions of Firefox and Google Chrome browsers. The Satellite web UI and command-line interface support English, Simplified Chinese, Japanese, French. 1.6. Port and firewall requirements For the components of Satellite architecture to communicate, ensure that the required network ports are open and free on the base operating system. You must also ensure that the required network ports are open on any network-based firewalls. Use this information to configure any network-based firewalls. Note that some cloud solutions must be specifically configured to allow communications between machines because they isolate machines similarly to network-based firewalls. If you use an application-based firewall, ensure that the application-based firewall permits all applications that are listed in the tables and known to your firewall. If possible, disable the application checking and allow open port communication based on the protocol. Integrated Capsule Satellite Server has an integrated Capsule and any host that is directly connected to Satellite Server is a Client of Satellite in the context of this section. This includes the base operating system on which Capsule Server is running. Clients of Capsule Hosts which are clients of Capsules, other than Satellite's integrated Capsule, do not need access to Satellite Server. For more information on Satellite Topology and an illustration of port connections, see Capsule Networking in Overview, concepts, and deployment considerations . Required ports can change based on your configuration. The following tables indicate the destination port and the direction of network traffic: Table 1.3. Satellite Server incoming traffic Destination Port Protocol Service Source Required For Description 53 TCP and UDP DNS DNS Servers and clients Name resolution DNS (optional) 67 UDP DHCP Client Dynamic IP DHCP (optional) 69 UDP TFTP Client TFTP Server (optional) 443 TCP HTTPS Capsule Red Hat Satellite API Communication from Capsule 443, 80 TCP HTTPS, HTTP Client Global Registration Registering hosts to Satellite Port 443 is required for registration initiation, uploading facts, and sending installed packages and traces Port 80 notifies Satellite on the /unattended/built endpoint that registration has finished 443 TCP HTTPS Red Hat Satellite Content Mirroring Management 443 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 443, 80 TCP HTTPS, HTTP Capsule Content Retrieval Content 443, 80 TCP HTTPS, HTTP Client Content Retrieval Content 1883 TCP MQTT Client Pull based REX (optional) Content hosts for REX job notification (optional) 5910 - 5930 TCP HTTPS Browsers Compute Resource's virtual console 8000 TCP HTTP Client Provisioning templates Template retrieval for client installers, iPXE or UEFI HTTP Boot 8000 TCP HTTPS Client PXE Boot Installation 8140 TCP HTTPS Client Puppet agent Client updates (optional) 9090 TCP HTTPS Red Hat Satellite Capsule API Smart Proxy functionality 9090 TCP HTTPS Client OpenSCAP Configure Client (if the OpenSCAP plugin is installed) 9090 TCP HTTPS Discovered Node Discovery Host discovery and provisioning (if the discovery plugin is installed) Any host that is directly connected to Satellite Server is a client in this context because it is a client of the integrated Capsule. This includes the base operating system on which a Capsule Server is running. A DHCP Capsule performs ICMP ping or TCP echo connection attempts to hosts in subnets with DHCP IPAM set to find out if an IP address considered for use is free. This behavior can be turned off using satellite-installer --foreman-proxy-dhcp-ping-free-ip=false . Note Some outgoing traffic returns to Satellite to enable internal communication and security operations. Table 1.4. Satellite Server outgoing traffic Destination Port Protocol Service Destination Required For Description ICMP ping Client DHCP Free IP checking (optional) 7 TCP echo Client DHCP Free IP checking (optional) 22 TCP SSH Target host Remote execution Run jobs 22, 16514 TCP SSH SSH/TLS Compute Resource Satellite originated communications, for compute resources in libvirt 53 TCP and UDP DNS DNS Servers on the Internet DNS Server Resolve DNS records (optional) 53 TCP and UDP DNS DNS Server Capsule DNS Validation of DNS conflicts (optional) 53 TCP and UDP DNS DNS Server Orchestration Validation of DNS conflicts 68 UDP DHCP Client Dynamic IP DHCP (optional) 80 TCP HTTP Remote repository Content Sync Remote repositories 389, 636 TCP LDAP, LDAPS External LDAP Server LDAP LDAP authentication, necessary only if external authentication is enabled. The port can be customized when LDAPAuthSource is defined 443 TCP HTTPS Satellite Capsule Capsule Configuration management Template retrieval OpenSCAP Remote Execution result upload 443 TCP HTTPS Amazon EC2, Azure, Google GCE Compute resources Virtual machine interactions (query/create/destroy) (optional) 443 TCP HTTPS console.redhat.com Red Hat Cloud plugin API calls 443 TCP HTTPS cdn.redhat.com Content Sync Red Hat CDN 443 TCP HTTPS api.access.redhat.com SOS report Assisting support cases filed through the Red Hat Customer Portal (optional) 443 TCP HTTPS cert-api.access.redhat.com Telemetry data upload and report 443 TCP HTTPS Capsule Content mirroring Initiation 443 TCP HTTPS Infoblox DHCP Server DHCP management When using Infoblox for DHCP, management of the DHCP leases (optional) 623 Client Power management BMC On/Off/Cycle/Status 5000 TCP HTTPS OpenStack Compute Resource Compute resources Virtual machine interactions (query/create/destroy) (optional) 5900 - 5930 TCP SSL/TLS Hypervisor noVNC console Launch noVNC console 7911 TCP DHCP, OMAPI DHCP Server DHCP The DHCP target is configured using --foreman-proxy-dhcp-server and defaults to localhost ISC and remote_isc use a configurable port that defaults to 7911 and uses OMAPI 8443 TCP HTTPS Client Discovery Capsule sends reboot command to the discovered host (optional) 9090 TCP HTTPS Capsule Capsule API Management of Capsules 1.7. Enabling connections from a client to Satellite Server Capsules and Content Hosts that are clients of a Satellite Server's internal Capsule require access through Satellite's host-based firewall and any network-based firewalls. Use this procedure to configure the host-based firewall on the system that Satellite is installed on, to enable incoming connections from Clients, and to make the configuration persistent across system reboots. For more information on the ports used, see Port and firewall requirements in Installing Satellite Server in a connected network environment . Procedure Open the ports for clients on Satellite Server: Allow access to services on Satellite Server: Make the changes persistent: Verification Enter the following command: For more information, see Using and configuring firewalld in Red Hat Enterprise Linux 9 Configuring firewalls and packet filters or Using and configuring firewalld in Red Hat Enterprise Linux 8 Configuring and managing networking . 1.8. Verifying DNS resolution Verify the full forward and reverse DNS resolution using a fully-qualified domain name to prevent issues while installing Satellite. Procedure Ensure that the host name and local host resolve correctly: Successful name resolution results in output similar to the following: To avoid discrepancies with static and transient host names, set all the host names on the system by entering the following command: For more information, see Changing a hostname using hostnamectl in Red Hat Enterprise Linux 9 Configuring and managing networking . Warning Name resolution is critical to the operation of Satellite. If Satellite cannot properly resolve its fully qualified domain name, tasks such as content management, subscription management, and provisioning will fail. 1.9. Tuning Satellite Server with predefined profiles If your Satellite deployment includes more than 5000 hosts, you can use predefined tuning profiles to improve performance of Satellite. Note that you cannot use tuning profiles on Capsules. You can choose one of the profiles depending on the number of hosts your Satellite manages and available hardware resources. The tuning profiles are available in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes directory. When you run the satellite-installer command with the --tuning option, deployment configuration settings are applied to Satellite in the following order: The default tuning profile defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml file The tuning profile that you want to apply to your deployment and is defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ directory Optional: If you have configured a /etc/foreman-installer/custom-hiera.yaml file, Satellite applies these configuration settings. Note that the configuration settings that are defined in the /etc/foreman-installer/custom-hiera.yaml file override the configuration settings that are defined in the tuning profiles. Therefore, before applying a tuning profile, you must compare the configuration settings that are defined in the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml , the tuning profile that you want to apply and your /etc/foreman-installer/custom-hiera.yaml file, and remove any duplicated configuration from the /etc/foreman-installer/custom-hiera.yaml file. default Number of hosts: 0 - 5000 RAM: 20G Number of CPU cores: 4 medium Number of hosts: 5001 - 10000 RAM: 32G Number of CPU cores: 8 large Number of hosts: 10001 - 20000 RAM: 64G Number of CPU cores: 16 extra-large Number of hosts: 20001 - 60000 RAM: 128G Number of CPU cores: 32 extra-extra-large Number of hosts: 60000+ RAM: 256G Number of CPU cores: 48+ Procedure Optional: If you have configured the custom-hiera.yaml file on Satellite Server, back up the /etc/foreman-installer/custom-hiera.yaml file to custom-hiera.original . You can use the backup file to restore the /etc/foreman-installer/custom-hiera.yaml file to its original state if it becomes corrupted: Optional: If you have configured the custom-hiera.yaml file on Satellite Server, review the definitions of the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml and the tuning profile that you want to apply in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ . Compare the configuration entries against the entries in your /etc/foreman-installer/custom-hiera.yaml file and remove any duplicated configuration settings in your /etc/foreman-installer/custom-hiera.yaml file. Enter the satellite-installer command with the --tuning option for the profile that you want to apply. For example, to apply the medium tuning profile settings, enter the following command: 1.10. Requirements for installation in an IPv4 network The following requirements apply to installations in an IPv4 network: An IPv6 loopback must be configured on the base system. The loopback is typically configured by default. Do not disable it. Do not disable IPv6 in kernel by adding the ipv6.disable=1 kernel parameter. For a supported way to disable the IPv6 protocol, see How do I disable the IPv6 protocol on Red Hat Satellite and/or Red Hat Capsule server? in Red Hat Knowledgebase.
[ "nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2", "restorecon -R /var/lib/pulp", "firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"", "firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all", "ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com", "ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms", "hostnamectl set-hostname name", "cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original", "satellite-installer --tuning medium" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_connected_network_environment/preparing_your_environment_for_installation_satellite
Chapter 3. Red Hat Quay infrastructure
Chapter 3. Red Hat Quay infrastructure Red Hat Quay runs on any physical or virtual infrastructure, both on premise or public cloud. Deployments range from simple to massively scaled, like the following: All-in-one setup on a developer notebook Highly available on virtual machines or on OpenShift Container Platform Geographically dispersed across multiple availability zones and regions 3.1. Running Red Hat Quay on standalone hosts You can automate the standalone deployment process by using Ansible or another automation suite. All standalone hosts require valid a Red Hat Enterprise Linux (RHEL) subscription. Proof of Concept deployment Red Hat Quay runs on a machine with image storage, containerized database, Redis, and optionally, Clair security scanning. Highly available setups Red Hat Quay and Clair run in containers across multiple hosts. You can use systemd units to ensure restart on failure or reboot. High availability setups on standalone hosts require customer-provided load balancers, either low-level TCP load balancers or application load balancers, capable of terminating TLS. 3.2. Running Red Hat Quay on OpenShift The Red Hat Quay Operator for OpenShift Container Platform provides the following features: Automated deployment and management of Red Hat Quay with customization options Management of Red Hat Quay and all of its dependencies Automated scaling and updates Integration with existing OpenShift Container Platform processes like GitOps, monitoring, alerting, logging Provision of object storage with limited availability, backed by the multi-cloud object gateway (NooBaa), as part of the Red Hat OpenShift Data Foundation (ODF) Operator. This service does not require an additional subscription. Scaled-out, high availability object storage provided by the ODF Operator. This service requires an additional subscription. Red Hat Quay can run on OpenShift Container Platform infrastructure nodes. As a result, no further subscriptions are required. Running Red Hat Quay on OpenShift Container Platform has the following benefits: Zero to Hero: Simplified deployment of Red Hat Quay and associated components means that you can start using the product immediately Scalability: Use cluster compute capacity to manage demand through automated scaling, based on actual load Simplified Networking: Automated provisioning of load balancers and traffic ingress secured through HTTPS using OpenShift Container Platform TLS certificates and Routes Declarative configuration management: Configurations stored in CustomResource objects for GitOps-friendly lifecycle management Repeatability: Consistency regardless of the number of replicas of Red Hat Quay and Clair OpenShift integration: Additional services to use OpenShift Container Platform Monitoring and Alerting facilities to manage multiple Red Hat Quay deployments on a single cluster 3.3. Integrating standalone Red Hat Quay with OpenShift Container Platform While the Red Hat Quay Operator ensures seamless deployment and management of Red Hat Quay running on OpenShift Container Platform, it is also possible to run Red Hat Quay in standalone mode and then serve content to one or many OpenShift Container Platform clusters, wherever they are running. Integrating standalone Red Hat Quay with OpenShift Container Platform Several Operators are available to help integrate standalone and Operator based deployments ofRed Hat Quay with OpenShift Container Platform, like the following: Red Hat Quay Cluster Security Operator Relays Red Hat Quay vulnerability scanning results into the OpenShift Container Platform console Red Hat Quay Bridge Operator Ensures seamless integration and user experience by using Red Hat Quay with OpenShift Container Platform in conjunction with OpenShift Container Platform Builds and ImageStreams 3.4. Mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is small-scale version of Red Hat Quay that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift Container Platform subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with pre-configured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as release images or Operator images. It uses local storage. Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly-available registry. Only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to use the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. More information is available at Creating a mirror registry with mirror registry for Red Hat OpenShift . 3.5. Single compared to multiple registries Many users consider running multiple, distinct registries. The preferred approach with Red Hat Quay is to have a single, shared registry: If you want a clear separation between development and production images, or a clear separation by content origin, for example, keeping third-party images distinct from internal ones, you can use organizations and repositories, combined with role-based access control (RBAC), to achieve the desired separation. Given that the image registry is a critical component in an enterprise environment, you may be tempted to use distinct deployments to test upgrades of the registry software to newer versions. The Red Hat Quay Operator updates the registry for patch releases as well as minor or major updates. This means that any complicated procedures are automated and, as a result, there is no requirement for you to provision multiple instances of the registry to test the upgrade. With Red Hat Quay, there is no need to have a separate registry for each cluster you deploy. Red Hat Quay is proven to work at scale at Quay.io , and can serve content to thousands of clusters. Even if you have deployments in multiple data centers, you can still use a single Red Hat Quay instance to serve content to multiple physically-close data centers, or use the HA functionality with load balancers to stretch across data centers. Alternatively, you can use the Red Hat Quay geo-replication feature to stretch across physically distant data centers. This requires the provisioning of a global load balancer or DNS-based geo-aware load balancing. One scenario where it may be appropriate to run multiple distinct registries, is when you want to specify different configuration for each registry. In summary, running a shared registry helps you to save storage, infrastructure and operational costs, but a dedicated registry might be needed in specific circumstances.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_architecture/arch-quay-infrastructure
5.5.2. File Systems
5.5.2. File Systems Even with the proper mass storage device, properly configured, and appropriately partitioned, we would still be unable to store and retrieve information easily -- we are missing a way of structuring and organizing that information. What we need is a file system . The concept of a file system is so fundamental to the use of mass storage devices that the average computer user often does not even make the distinction between the two. However, system administrators cannot afford to ignore file systems and their impact on day-to-day work. A file system is a method of representing data on a mass storage device. File systems usually include the following features: File-based data storage Hierarchical directory (sometimes known as "folder") structure Tracking of file creation, access, and modification times Some level of control over the type of access allowed for a specific file Some concept of file ownership Accounting of space utilized Not all file systems posses every one of these features. For example, a file system constructed for a single-user operating system could easily use a more simplified method of access control and could conceivably do away with support for file ownership altogether. One point to keep in mind is that the file system used can have a large impact on the nature of your daily workload. By ensuring that the file system you use in your organization closely matches your organization's functional requirements, you can ensure that not only is the file system up to the task, but that it is more easily and efficiently maintainable. With this in mind, the following sections explore these features in more detail. 5.5.2.1. File-Based Storage While file systems that use the file metaphor for data storage are so nearly universal as to be considered a given, there are still some aspects that should be considered here. First is to be aware of any restrictions on file names. For instance, what characters are permitted in a file name? What is the maximum file name length? These questions are important, as it dictates those file names that can be used and those that cannot. Older operating systems with more primitive file systems often allowed only alphanumeric characters (and only uppercase at that), and only traditional 8.3 file names (meaning an eight-character file name, followed by a three-character file extension).
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-usable-fs
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create OpenShift Data Foundation cluster on IBM Power . 2.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Note Use of Vault namespaces are not supported with the Kubernetes authentication method in OpenShift Data Foundation 4.11. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.5. Finding available storage devices Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you have attached while deploying Openshift Container Platform. Example output: In this example, for worker-0, the available local devices of 500G are sda , sdc , sde , sdg , sdi , sdk , sdm , sdo . Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.6. Creating OpenShift Data Foundation cluster on IBM Power Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power. Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation: To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace Click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for block PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . Confirm whether diskmaker-manager pods and Persistent Volumes are created. For Pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-local-storage from the Project drop-down list. Check if there are diskmaker-manager pods for each of the worker node that you used while creating LocalVolume CR. For Persistent Volumes Click Storage PersistentVolumes from the left pane of the OpenShift Web Console. Check the Persistent Volumes with the name local-pv-* . Number of Persistent Volumes will be equivalent to the product of number of worker nodes and number of storage devices provisioned while creating localVolume CR. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. This feature is available only in new deployments of Red Hat OpenShift Data Foundation versions 4.7 and later. Storage clusters upgraded from a version to version 4.7 or later do not support flexible scaling. For more information, see Flexible scaling of OpenShift Data Foundation cluster in the New features section of Release Notes . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the required Storage Class that you used while installing LocalVolume. By default, it is set to none . Click . In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). The Selected nodes list shows the nodes based on the storage class. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select Default (SDN) network as Multus is not yet supported on OpenShift Data Foundation on IBM Power. Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page:: Review the configurations details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "oc get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION worker-0 Ready worker 2d11h v1.23.3+e419edf worker-1 Ready worker 2d11h v1.23.3+e419edf worker-2 Ready worker 2d11h v1.23.3+e419edf", "oc debug node/<node name>", "oc debug node/worker-0 Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.0.63 If you don't see a command prompt, try pressing enter. sh-4.4# sh-4.4# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 500G 0 loop sda 8:0 0 500G 0 disk sdb 8:16 0 120G 0 disk |-sdb1 8:17 0 4M 0 part |-sdb3 8:19 0 384M 0 part `-sdb4 8:20 0 119.6G 0 part sdc 8:32 0 500G 0 disk sdd 8:48 0 120G 0 disk |-sdd1 8:49 0 4M 0 part |-sdd3 8:51 0 384M 0 part `-sdd4 8:52 0 119.6G 0 part sde 8:64 0 500G 0 disk sdf 8:80 0 120G 0 disk |-sdf1 8:81 0 4M 0 part |-sdf3 8:83 0 384M 0 part `-sdf4 8:84 0 119.6G 0 part sdg 8:96 0 500G 0 disk sdh 8:112 0 120G 0 disk |-sdh1 8:113 0 4M 0 part |-sdh3 8:115 0 384M 0 part `-sdh4 8:116 0 119.6G 0 part sdi 8:128 0 500G 0 disk sdj 8:144 0 120G 0 disk |-sdj1 8:145 0 4M 0 part |-sdj3 8:147 0 384M 0 part `-sdj4 8:148 0 119.6G 0 part sdk 8:160 0 500G 0 disk sdl 8:176 0 120G 0 disk |-sdl1 8:177 0 4M 0 part |-sdl3 8:179 0 384M 0 part `-sdl4 8:180 0 119.6G 0 part /sysroot sdm 8:192 0 500G 0 disk sdn 8:208 0 120G 0 disk |-sdn1 8:209 0 4M 0 part |-sdn3 8:211 0 384M 0 part /boot `-sdn4 8:212 0 119.6G 0 part sdo 8:224 0 500G 0 disk sdp 8:240 0 120G 0 disk |-sdp1 8:241 0 4M 0 part |-sdp3 8:243 0 384M 0 part `-sdp4 8:244 0 119.6G 0 part", "get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}'", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Block", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_power/deploy-using-local-storage-devices-ibm-power
Preface
Preface As a cluster administrator, you can configure either automatic or manual upgrade of the OpenShift AI Operator.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/upgrading_openshift_ai_self-managed/pr01
Chapter 7. Console [config.openshift.io/v1]
Chapter 7. Console [config.openshift.io/v1] Description Console holds cluster-wide configuration for the web console, including the logout URL, and reports the public URL of the console. The canonical name is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 7.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description authentication object ConsoleAuthentication defines a list of optional configuration for console authentication. 7.1.2. .spec.authentication Description ConsoleAuthentication defines a list of optional configuration for console authentication. Type object Property Type Description logoutRedirect string An optional, absolute URL to redirect web browsers to after logging out of the console. If not specified, it will redirect to the default login page. This is required when using an identity provider that supports single sign-on (SSO) such as: - OpenID (Keycloak, Azure) - RequestHeader (GSSAPI, SSPI, SAML) - OAuth (GitHub, GitLab, Google) Logging out of the console will destroy the user's token. The logoutRedirect provides the user the option to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 7.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description consoleURL string The URL for the console. This will be derived from the host for the route that is created for the console. 7.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/consoles DELETE : delete collection of Console GET : list objects of kind Console POST : create a Console /apis/config.openshift.io/v1/consoles/{name} DELETE : delete a Console GET : read the specified Console PATCH : partially update the specified Console PUT : replace the specified Console /apis/config.openshift.io/v1/consoles/{name}/status GET : read status of the specified Console PATCH : partially update status of the specified Console PUT : replace status of the specified Console 7.2.1. /apis/config.openshift.io/v1/consoles Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Console Table 7.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Console Table 7.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleList schema 401 - Unauthorized Empty HTTP method POST Description create a Console Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body Console schema Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 202 - Accepted Console schema 401 - Unauthorized Empty 7.2.2. /apis/config.openshift.io/v1/consoles/{name} Table 7.9. Global path parameters Parameter Type Description name string name of the Console Table 7.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Console Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.12. Body parameters Parameter Type Description body DeleteOptions schema Table 7.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Console Table 7.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.15. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Console Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body Patch schema Table 7.18. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Console Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body Console schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty 7.2.3. /apis/config.openshift.io/v1/consoles/{name}/status Table 7.22. Global path parameters Parameter Type Description name string name of the Console Table 7.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Console Table 7.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.25. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Console Table 7.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.27. Body parameters Parameter Type Description body Patch schema Table 7.28. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Console Table 7.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.30. Body parameters Parameter Type Description body Console schema Table 7.31. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/console-config-openshift-io-v1
Chapter 22. Kernel
Chapter 22. Kernel kpatch , see the section called "Dynamic kernel Patching" crashkernel with more than one CPU, see the section called "Crashkernel with More than 1 CPU" dm-era device-mapper target, see the section called "dm-era Target" Cisco VIC kernel driver, see the section called "Cisco VIC kernel Driver" NFSoRDMA Available, see the section called "NFSoRDMA Available" Runtime Instrumentation for IBM System z, see the section called "Runtime Instrumentation for IBM System z" Cisco usNIC Driver, see the section called "Cisco usNIC Driver" Intel Ethernet Server Adapter X710/XL710 Driver Update, see the section called "Intel Ethernet Server Adapter X710/XL710 Driver Update"
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-tp-kernel
3.13. Example: Create Virtual Machine Storage Disk
3.13. Example: Create Virtual Machine Storage Disk The following example creates an 8 GB Copy-On-Write storage disk for the example virtual machine. Example 3.15. Create a virtual machine storage disk Request: cURL command: The storage_domain element tells the API to store the disk on the data1 storage domain.
[ "POST /ovirt-engine/api/vms/6efc0cfa-8495-4a96-93e5-ee490328cf48/disks HTTP/1.1 Accept: application/xml Content-type: application/xml <disk> <storage_domains> <storage_domain id=\"9ca7cb40-9a2a-4513-acef-dc254af57aac\"/> </storage_domains> <size>8589934592</size> <type>system</type> <interface>virtio</interface> <format>cow</format> <bootable>true</bootable> </disk>", "curl -X POST -H \"Accept: application/xml\" -H \"Content-Type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<disk><storage_domains> <storage_domain id='9ca7cb40-9a2a-4513-acef-dc254af57aac'/> </storage_domains><size>8589934592</size><type>system</type> <interface>virtio</interface><format>cow</format> <bootable>true</bootable></disk>\" https:// [RHEVM Host] :443/ovirt-engine/api/vms/6efc0cfa-8495-4a96-93e5-ee490328cf48/disks" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_create_virtual_machine_storage_disk
Chapter 11. Viewing Threads
Chapter 11. Viewing Threads You can view and monitor the state of threads. Procedure : Click the Runtime tab and then the Threads subtab. The Threads page lists active threads and stack trace details for each thread. By default, the thread list shows all threads in descending ID order. To sort the list by increasing ID, click the ID column label. Optionally, filter the list by thread state (for example, Blocked ) or by thread name. To drill down to detailed information for a specific thread, such as the lock class name and full stack trace for that thread, in the Actions column, click More .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/hawtio_diagnostic_console_guide/viewing-threads
Chapter 2. Jenkins agent
Chapter 2. Jenkins agent OpenShift Container Platform provides a base image for use as a Jenkins agent. The Base image for Jenkins agents does the following: Pulls in both the required tools, headless Java, the Jenkins JNLP client, and the useful ones, including git , tar , zip , and nss , among others. Establishes the JNLP agent as the entry point. Includes the oc client tool for invoking command line operations from within Jenkins jobs. Provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images. Important Use a version of the agent image that is appropriate for your OpenShift Container Platform release version. Embedding an oc client version that is not compatible with the OpenShift Container Platform version can cause unexpected behavior. The OpenShift Container Platform Jenkins image also defines the following sample java-builder pod template to illustrate how you can use the agent image with the Jenkins Kubernetes plugin. The java-builder pod template employs two containers: * A jnlp container that uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. * A java container that uses the java OpenShift Container Platform Sample ImageStream, which contains the various Java binaries, including the Maven binary mvn , for building code. 2.1. Jenkins agent images The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io . Jenkins images are available through the Red Hat Registry: USD docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> USD docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag> To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry. 2.2. Jenkins agent environment variables Each Jenkins agent container can be configured with the following environment variables. Variable Definition Example values and settings JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space and if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value USE_JAVA_VERSION Specifies the version of Java version to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0 . If you extend the container base image, you can specify any alternative version of java using its associated suffix. The default value is java-11 . Example setting: java-1.8.0 2.3. Jenkins agent memory requirements A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac , Maven, or Gradle. By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely. By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill. By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads. 2.4. Jenkins agent Gradle builds Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified. The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required. Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file. Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument. To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file. Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file. Override the Gradle JVM memory parameters by the GRADLE_OPTS , JAVA_OPTS or JAVA_TOOL_OPTIONS environment variables. Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle , or through the -Dorg.gradle.jvmargs command line argument. 2.5. Jenkins agent pod retention Jenkins agent pods, are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported: Always keeps the build pod regardless of build result. Default uses the plugin value, which is the pod template only. Never always deletes the pod. On Failure keeps the pod if it fails during the build. You can override pod retention in the pipeline Jenkinsfile: podTemplate(label: "mypod", cloud: "openshift", inheritFrom: "maven", podRetention: onFailure(), 1 containers: [ ... ]) { node("mypod") { ... } } 1 Allowed values for podRetention are never() , onFailure() , always() , and default() . Warning Pods that are kept might continue to run and count against resource quotas.
[ "docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>", "podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/jenkins/images-other-jenkins-agent
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_logical_volumes_on_rhel/proc_providing-feedback-on-red-hat-documentation_deduplicating-and-compressing-logical-volumes-on-rhel
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_authentication/proc_providing-feedback-on-red-hat-documentation_security-oidc-authentication
High Availability Guide
High Availability Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/index
Chapter 9. ImageStreamTag [image.openshift.io/v1]
Chapter 9. ImageStreamTag [image.openshift.io/v1] Description ImageStreamTag represents an Image that is retrieved by tag name from an ImageStream. Use this resource to interact with the tags and images in an image stream by tag, or to see the image details for a particular tag. The image associated with this resource is the most recently successfully tagged, imported, or pushed image (as described in the image stream status.tags.items list for this tag). If an import is in progress or has failed the image will be shown. Deleting an image stream tag clears both the status and spec fields of an image stream. If no image can be retrieved for a given tag, a not found error will be returned. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required tag generation lookupPolicy image 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array conditions is an array of conditions that apply to the image stream tag. conditions[] object TagEventCondition contains condition information for a tag event. generation integer generation is the current generation of the tagged image - if tag is provided and this value is not equal to the tag generation, a user has requested an import that has not completed, or conditions will be filled out indicating any error. image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lookupPolicy object ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata tag object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. 9.1.1. .conditions Description conditions is an array of conditions that apply to the image stream tag. Type array 9.1.2. .conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 9.1.3. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 9.1.4. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 9.1.5. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 9.1.6. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 9.1.7. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 9.1.8. .image.signatures Description Signatures holds all signatures of the image. Type array 9.1.9. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 9.1.10. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 9.1.11. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 9.1.12. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 9.1.13. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 9.1.14. .lookupPolicy Description ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. Type object Required local Property Type Description local boolean local will change the docker short image references (like "mysql" or "php:latest") on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. Only works within the current namespace. 9.1.15. .tag Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 9.1.16. .tag.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 9.1.17. .tag.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 9.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagestreamtags GET : list objects of kind ImageStreamTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags GET : list objects of kind ImageStreamTag POST : create an ImageStreamTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} DELETE : delete an ImageStreamTag GET : read the specified ImageStreamTag PATCH : partially update the specified ImageStreamTag PUT : replace the specified ImageStreamTag 9.2.1. /apis/image.openshift.io/v1/imagestreamtags HTTP method GET Description list objects of kind ImageStreamTag Table 9.1. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTagList schema 401 - Unauthorized Empty 9.2.2. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags HTTP method GET Description list objects of kind ImageStreamTag Table 9.2. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTagList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageStreamTag Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body ImageStreamTag schema Table 9.5. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 202 - Accepted ImageStreamTag schema 401 - Unauthorized Empty 9.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the ImageStreamTag HTTP method DELETE Description delete an ImageStreamTag Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageStreamTag Table 9.9. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageStreamTag Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageStreamTag Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body ImageStreamTag schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/image_apis/imagestreamtag-image-openshift-io-v1
8.188. pykickstart
8.188. pykickstart 8.188.1. RHBA-2014:1541 - pykickstart bug fix and enhancement update An updated pykickstart package that fixes one bug and adds two enhancements is now available for Red Hat Enterprise Linux 6. The pykickstart package contains a python library for manipulating kickstart files. Additionally, this update adds the following enhancement: Note The pykickstart package has been upgraded to upstream version 1.74.15, which provides one bug fix and one enhancement over the version. This version of pykickstart, or higher, is required for the latest version of Anaconda to work properly. (BZ# 1108543 ) Additionally, this update adds the following enhancement: Enhancement BZ# 1125410 The pykickstart package has been modified to support installation of Docker images in Anaconda. Users of pykickstart are advised to upgrade to this updated package, which fixes this bug and adds these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/pykickstart
7.265. virt-manager
7.265. virt-manager 7.265.1. RHBA-2013:0451 - virt-manager bug fix and enhancement update Updated virt-manager packages that fix three bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. Virtual Machine Manager (virt-manager) is a graphical tool for administering virtual machines for KVM, Xen, and QEMU. The virt-manager utility uses the libvirt API and can start, stop, add or remove virtualized devices, connect to a graphical or serial console, and view resource usage statistics for existing virtualized guests on local or remote machines. Bug Fixes BZ#802639 Previously, the live migration dialog box of the virt-manager tool incorrectly described the unit of bandwidth as "Mbps" instead of "MB/s". With this update, the migration dialog has been changed to provide correct information on bandwidth units. BZ# 824275 Prior to this update, an unnecessary reboot occurred after the virt-manager tool created a new guest virtual machine by importing an existing disk image. With this update, a backported upstream patch has been provided, and virt-manager no longer restarts after importing an existing disk image. BZ# 872611 Due to differences in dependency solving between the yum and rpm programs, the virt-manager package failed to update from "noarch" to newer architecture version. With this update, a patch has been provided to mark the noarch version as obsolete. As a result, the noarch package can now be updated without complications. Enhancement BZ#878946 The "Delete Associated storage files" option is enabled by default in the virt-manager tool. A warning message is displayed prior to file deletion to notify the user about this configuration. All users of virt-manager are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/virt-manager
Chapter 2. OpenShift Data Foundation upgrade channels and releases
Chapter 2. OpenShift Data Foundation upgrade channels and releases In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation. For example, OpenShift Data Foundation 4.13 upgrade channels recommend upgrades within 4.13. Upgrades to future releases is not recommended. This strategy ensures that administrators can explicitly decide to upgrade to the minor version of OpenShift Data Foundation. Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. By default, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform. So, on OpenShift Container Platform 4.13, OpenShift Data Foundation 4.13 will be the latest version which can be installed. OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.13, OpenShift Container Platform 4.13 and 4.14 (when generally available) are supported. OpenShift Container Platform 4.13 is supported to maintain forward compatibility of OpenShift Data Foundation with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release. Important Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.11 to 4.12 and then to 4.13. You cannot update from OpenShift Container Platform 4.10 to 4.12 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation. OpenShift Data Foundation 4.13 offers the following upgrade channel: stable-4.13 eus-4.y (only when running an even-numbered 4.y cluster release, like 4.13) stable-4.13 channel Once a new version is Generally Available, the stable channel corresponding to the minor version gets updated with the new image which can be used to upgrade. You can use the stable-4.13 channel to upgrade from OpenShift Data Foundation 4.12 and upgrades within 4.13. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Data Foundation offer Extended Update Support (EUS). These EUS versions extend the Full and Maintenance support phases for customers with Standard and Premium Subscriptions to 18 months. The only difference between stable-4.y and eus-4.y channels is that the EUS channels will include the release only when the EUS release is available.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/updating_openshift_data_foundation/openshift-data-foundation-upgrade-channels-and-releases_rhodf
Installing on IBM Power
Installing on IBM Power OpenShift Container Platform 4.15 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power/index
Getting started with Red Hat OpenShift AI Self-Managed
Getting started with Red Hat OpenShift AI Self-Managed Red Hat OpenShift AI Self-Managed 2.18 Learn how to work in an OpenShift AI environment
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/getting_started_with_red_hat_openshift_ai_self-managed/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_red_hat_build_of_openjdk_17/making-open-source-more-inclusive
12.4.3. Creating an LVM-based Storage Pool with virsh
12.4.3. Creating an LVM-based Storage Pool with virsh This section outlines the steps required to create an LVM-based storage pool with the virsh command. It uses the example of a pool named guest_images_lvm from a single drive ( /dev/sdc ). This is only an example and your settings should be substituted as appropriate. Procedure 12.3. Creating an LVM-based storage pool with virsh Define the pool name guest_images_lvm . Build the pool according to the specified name. If you are using an already existing volume group, skip this step. Initialize the new pool. Show the volume group information with the vgs command. Set the pool to start automatically. List the available pools with the virsh command. The following commands demonstrate the creation of three volumes ( volume1 , volume2 and volume3 ) within this pool. List the available volumes in this pool with the virsh command. The following two commands ( lvscan and lvs ) display further information about the newly created volumes.
[ "virsh pool-define-as guest_images_lvm logical - - /dev/sdc libvirt_lvm \\ /dev/ libvirt_lvm Pool guest_images_lvm defined", "virsh pool-build guest_images_lvm Pool guest_images_lvm built", "virsh pool-start guest_images_lvm Pool guest_images_lvm started", "vgs VG #PV #LV #SN Attr VSize VFree libvirt_lvm 1 0 0 wz--n- 465.76g 465.76g", "virsh pool-autostart guest_images_lvm Pool guest_images_lvm marked as autostarted", "virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_lvm active yes", "virsh vol-create-as guest_images_lvm volume1 8G Vol volume1 created virsh vol-create-as guest_images_lvm volume2 8G Vol volume2 created virsh vol-create-as guest_images_lvm volume3 8G Vol volume3 created", "virsh vol-list guest_images_lvm Name Path ----------------------------------------- volume1 /dev/libvirt_lvm/volume1 volume2 /dev/libvirt_lvm/volume2 volume3 /dev/libvirt_lvm/volume3", "lvscan ACTIVE '/dev/libvirt_lvm/volume1' [8.00 GiB] inherit ACTIVE '/dev/libvirt_lvm/volume2' [8.00 GiB] inherit ACTIVE '/dev/libvirt_lvm/volume3' [8.00 GiB] inherit lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert volume1 libvirt_lvm -wi-a- 8.00g volume2 libvirt_lvm -wi-a- 8.00g volume3 libvirt_lvm -wi-a- 8.00g" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/create-lvm-storage-pool-virsh
Chapter 1. Support overview
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected cluster . Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insight Operator : By default, OpenShift Container Platform installs and enables the Insight Operator, which reports configuration and component failure status every two hours. The Insight Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can Review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather(?) diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs. Crio issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Cleaning CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures in the "Support" and "Troubleshooting logging" sections to resolve logging issues: Viewing the status of the Red Hat OpenShift Logging Operator . Viewing the status of logging components . Troubleshooting logging alerts . Collecting information about your logging environment by using the oc adm must-gather command . OpenShift CLI ( oc ) issues : Investigate OpenShift CLI ( oc ) issues by increasing the log level.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/support/support-overview
Chapter 9. Monitoring hosts using Red Hat Insights
Chapter 9. Monitoring hosts using Red Hat Insights You can use Insights to diagnose systems and downtime related to security exploits, performance degradation, and stability failures. You can use the Insights dashboard to quickly identify key risks to stability, security, and performance. You can sort by category, view details of the impact and resolution, and then determine what systems are affected. To use Insights to monitor hosts that you manage with Satellite, you must first install Insights on your hosts and register your hosts with Insights. For new Satellite hosts, you can install and configure Insights during host registration to Satellite. For more information, see Section 4.3, "Registering hosts by using global registration" . For hosts already registered to Satellite, you can install and configure Insights on your hosts by using an Ansible role. For more information, see Section 9.3, "Deploying Red Hat Insights using the Ansible role" . Additional information To view the logs for all plugins, go to /var/log/foreman/production.log . If you have problems connecting to Insights, ensure that your certificates are up-to-date. Refresh your subscription manifest to update your certificates. You can change the default schedule for running insights-client by configuring insights-client.timer on a host. For more information, see Changing the insights-client schedule in the Client Configuration Guide for Red Hat Insights . 9.1. Access to information from Insights in Satellite You can access the additional information available for hosts from Red Hat Insights in the following places in the Satellite web UI: Navigate to Configure > Insights where the vertical ellipsis to the Remediate button provides a View in Red Hat Insights link to the general recommendations page. On each recommendation line, the vertical ellipsis provides a View in Red Hat Insights link to the recommendation rule, and, if one is available for that recommendation, a Knowledgebase article link. For additional information, navigate to Hosts > All Hosts . If the host has recommendations listed, click on the number of recommendations. On the Insights tab, the vertical ellipsis to the Remediate button provides a Go To Satellite Insights page link to information for the system, and a View in Red Hat Insights link to host details on the console. 9.2. Excluding hosts from rh-cloud and insights-client reports You can set the host_registration_insights parameter to False to omit rh-cloud and insights-client reports. Satellite will exclude the hosts from rh-cloud reports and block insights-client from uploading a report to the cloud. Use the following procedure to change the value of host_registration_insights parameter: Procedure In the Satellite web UI, navigate to Host > All Hosts . Select any host for which you want to change the value. On the Parameters tab, click on the edit button of host_registration_insights . Set the value to False . This parameter can also be set at the organization, hostgroup, subnet, and domain level. Also, it automatically prevents new reports from being uploaded as long as they are associated with the entity. If you set the parameter to false on a host that is already reported on the Red Hat Hybrid Cloud Console , it will be still removed automatically from the inventory. However, this process can take some time to complete. 9.3. Deploying Red Hat Insights using the Ansible role The RedHatInsights.insights-client Ansible role is used to automate the installation and registration of hosts with Insights. For more information about adding this role to your Satellite, see Getting Started with Ansible in Satellite in Managing configurations using Ansible integration . Procedure Add the RedHatInsights.insights-client role to the hosts. For new hosts, see Section 2.1, "Creating a host in Red Hat Satellite" . For existing hosts, see Using Ansible Roles to Automate Repetitive Tasks on Clients in Managing configurations using Ansible integration . To run the RedHatInsights.insights-client role on your host, navigate to Hosts > All Hosts and click the name of the host that you want to use. On the host details page, expand the Schedule a job dropdown menu. Click Run Ansible roles . 9.4. Configuring synchronization of Insights recommendations for hosts You can enable automatic synchronization of the recommendations from Red Hat Hybrid Cloud Console that occurs daily by default. If you leave the setting disabled, you can synchronize the recommendations manually. Procedures To get the recommendations automatically: In the Satellite web UI, navigate to Configure > Insights . Enable Sync Automatically . To get the recommendations manually: In the Satellite web UI, navigate to Configure > Insights . On the vertical ellipsis, click Sync Recommendations . 9.5. Configuring automatic removal of hosts from the Insights Inventory When hosts are removed from Satellite, they can also be removed from the inventory of Red Hat Insights, either automatically or manually. You can configure automatic removal of hosts from the Insights Inventory during Red Hat Hybrid Cloud Console synchronization with Satellite that occurs daily by default. If you leave the setting disabled, you can still remove the bulk of hosts from the Inventory manually. Prerequisites Your user account must have the permission of view_foreman_rh_cloud to view the Inventory Upload page in Satellite web UI. Procedure In the Satellite web UI, navigate to Configure > Inventory Upload . Enable the Automatic Mismatch Deletion setting. 9.6. Creating an Insights remediation plan for hosts With Satellite, you can create a Red Hat Insights remediation plan and run the plan on Satellite hosts. Procedure In the Satellite web UI, navigate to Configure > Insights . On the Red Hat Insights page, select the number of recommendations that you want to include in an Insights plan. You can only select the recommendations that have an associated playbook. Click Remediate . In the Remediation Summary window, you can select the Resolutions to apply. Use the Filter field to search for specific keywords. Click Remediate . In the Job Invocation page, do not change the contents of precompleted fields. Optional. For more advanced configuration of the Remote Execution Job, click Show Advanced Fields . Select the Type of query you require. Select the Schedule you require. Click Submit . Alternatively: In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Host details page, click Recommendations . On the Red Hat Insights page, select the number of recommendations you want to include in an Insights plan and proceed as before. In the Jobs window, you can view the progress of your plan.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/monitoring_hosts_using_red_hat_insights_managing-hosts
Chapter 1. Web Console Overview
Chapter 1. Web Console Overview The Red Hat OpenShift Container Platform web console provides a graphical user interface to visualize your project data and perform administrative, management, and troubleshooting tasks. The web console runs as pods on the control plane nodes in the openshift-console project. It is managed by a console-operator pod. Both Administrator and Developer perspectives are supported. Both Administrator and Developer perspectives enable you to create quick start tutorials for OpenShift Container Platform. A quick start is a guided tutorial with user tasks and is useful for getting oriented with an application, Operator, or other product offering. 1.1. About the Administrator perspective in the web console The Administrator perspective enables you to view the cluster inventory, capacity, general and specific utilization information, and the stream of important events, all of which help you to simplify planning and troubleshooting tasks. Both project administrators and cluster administrators can view the Administrator perspective. Cluster administrators can also open an embedded command line terminal instance with the web terminal Operator in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Administrator perspective is displayed by default if the user is recognized as an administrator. The Administrator perspective provides workflows specific to administrator use cases, such as the ability to: Manage workload, storage, networking, and cluster settings. Install and manage Operators using the Operator Hub. Add identity providers that allow users to log in and manage user access through roles and role bindings. View and manage a variety of advanced settings such as cluster updates, partial cluster updates, cluster Operators, custom resource definitions (CRDs), role bindings, and resource quotas. Access and manage monitoring features such as metrics, alerts, and monitoring dashboards. View and manage logging, metrics, and high-status information about the cluster. Visually interact with applications, components, and services associated with the Administrator perspective in OpenShift Container Platform. 1.2. About the Developer perspective in the web console The Developer perspective offers several built-in ways to deploy applications, services, and databases. In the Developer perspective, you can: View real-time visualization of rolling and recreating rollouts on the component. View the application status, resource utilization, project event streaming, and quota consumption. Share your project with others. Troubleshoot problems with your applications by running Prometheus Query Language (PromQL) queries on your project and examining the metrics visualized on a plot. The metrics provide information about the state of a cluster and any user-defined workloads that you are monitoring. Cluster administrators can also open an embedded command line terminal instance in the web console in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Developer perspective is displayed by default if the user is recognised as a developer. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on OpenShift Container Platform by importing existing codebases, images, and container files. Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using Eclipse Che. You can use the Topology view to display applications, components, and workloads of your project. If you have no workloads in the project, the Topology view will show some links to create or import them. You can also use the Quick Search to import components directly. Additional Resources See Viewing application composition using the Topology view for more information on using the Topology view in Developer perspective. 1.3. Accessing the Perspectives You can access the Administrator and Developer perspective from the web console as follows: Prerequisites To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the permission of the users. The Administrator perspective is selected for users with access to all projects, while the Developer perspective is selected for users with limited access to their own projects Additional Resources See Adding User Preferences for more information on changing perspectives. Procedure Use the perspective switcher to switch to the Administrator or Developer perspective. Select an existing project from the Project drop-down list. You can also create a new project from this dropdown. Note You can use the perspective switcher only as cluster-admin . Additional resources Learn more about Cluster Administrator Overview of the Administrator perspective Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view Viewing cluster information Configuring the web console Customizing the web console Using the web terminal Creating quick start tutorials Disabling the web console
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/web_console/web-console-overview
2.3. Deprecated Features
2.3. Deprecated Features Note This is the last Red Hat Gluster Storage release supporting Red Hat Enterprise Linux 6. Customers are adviced to upgrade to Red Hat Enterprise Linux 7 . The following features are deprecated as of Red Hat Gluster Storage 3.5, or will be considered deprecated in subsequent releases. Review individual items for details about the likely removal time frame of the feature. Gluster-NFS Gluster-NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of Gluster-NFS, and does not support its use in new deployments on Red Hat Gluster Storage 3.5 and above. Existing deployments that upgrade to Red Hat Gluster Storage 3.5 remain supported, but users are encouraged to migrate to NFS-Ganesha, which provides enhanced functionality, additional security features, and performance improvements detailed in Section 2.1, "What's New in this Release?" . Remote Direct Memory Access (RDMA) Using RDMA as a transport protocol is considered deprecated in Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support new deployments on Red Hat Gluster Storage 3.5. Existing deployments that upgrade to Red Hat Gluster Storage 3.5 remain supported. Tiering Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments on Red Hat Gluster Storage 3.5. Existing deployments that upgrade to Red Hat Gluster Storage 3.5 remain supported. Red Hat Enterprise Linux 6 (RHEL 6) based ISO image for RHGS server Starting with Red Hat Gluster Storage 3.5 release, Red Hat Enterprise Linux 6 based ISO image for Red Hat Gluster Storage server will no longer be included. Parallel NFS (pNFS) As of Red Hat Gluster Storage 3.4, Parallel NFS is considered unsupported and is no longer available as a Technology Preview. Several long-term issues with this feature that affect stability remain unresolved upstream. Information about using this feature has been removed from Red Hat Gluster Storage 3.4 but remains available in the documentation for releases that provided Parallel NFS as a Technology Preview. Parallel NFS (pNFS) As of Red Hat Gluster Storage 3.4, Parallel NFS is considered unsupported and is no longer available as a Technology Preview. Several long-term issues with this feature that affect stability remain unresolved upstream. Information about using this feature has been removed from Red Hat Gluster Storage 3.4 but remains available in the documentation for releases that provided Parallel NFS as a Technology Preview.
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/3.5_release_notes/deprecations
30.4. ConsistentHashFactories
30.4. ConsistentHashFactories Red Hat JBoss Data Grid offers a pluggable mechanism for selecting the consistent hashing algorithm. It is shipped with four implementations but a custom implementation can also be used. JBoss Data Grid ships with four ConsistentHashFactory implementations: DefaultConsistentHashFactory - keeps segments balanced evenly across all the nodes, however the key mapping is not guaranteed to be same across caches,as this depends on the history of each cache. If no consistentHashFactory is specified this is the class that will be used. SyncConsistentHashFactory - guarantees that the key mapping is the same for each cache, provided the current membership is the same. This has a drawback in that a node joining the cache can cause the existing nodes to also exchange segments, resulting in either additional state transfer traffic, the distribution of the data becoming less even, or both. TopologyAwareConsistentHashFactory - equivalent of DefaultConsistentHashFactory , but automatically selected when the configuration includes server hinting. TopologyAwareSyncConsistentHashFactory - equivalent of SyncConsistentHashFactory , but automatically selected when the configuration includes server hinting. The consistent hash implementation can be selected via the hash configuration: This configuration guarantees caches with the same members have the same consistent hash, and if the machineId , rackId , or siteId attributes are specified in the transport configuration it also spreads backup copies across physical machines/racks/data centers. It has a potential drawback in that it can move a greater number of segments than necessary during re-balancing. This can be mitigated by using a larger number of segments. Another potential drawback is that the segments are not distributed as evenly as possible, and actually using a very large number of segments can make the distribution of segments worse. Despite the above potential drawbacks the SyncConsistentHashFactory and TopologyAwareSyncConsistentHashFactory both tend to reduce overhead in clustered environments, as neither of these calculate the hash based on the order that nodes have joined the cluster. In addition, both of these classes are typically faster than the default algorithms as both of these classes allow larger differences in the number of segments allocated to each node. Report a bug 30.4.1. Implementing a ConsistentHashFactory A custom ConsistentHashFactory must implement the org.infinispan.distribution.ch.ConsistenHashFactory interface with the following methods (all of which return an implementation of org.infinispan.distribution.ch.ConsistentHash ): Example 30.1. ConsistentHashFactory Methods Currently it is not possible to pass custom parameters to ConsistentHashFactory implementations. Report a bug
[ "<hash consistentHashFactory=\"org.infinispan.distribution.ch.SyncConsistentHashFactory\"/>", "create(Hash hashFunction, int numOwners, int numSegments, List<Address> members,Map<Address, Float> capacityFactors) updateMembers(ConsistentHash baseCH, List<Address> newMembers, Map<Address, Float> capacityFactors) rebalance(ConsistentHash baseCH) union(ConsistentHash ch1, ConsistentHash ch2)" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-consistenthashfactories
Chapter 14. Backup and restore
Chapter 14. Backup and restore 14.1. Backup and restore by using VM snapshots You can back up and restore virtual machines (VMs) by using snapshots. Snapshots are supported by the following storage providers: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Online snapshots have a default time deadline of five minutes ( 5m ) that can be changed, if needed. Important Online snapshots are supported for virtual machines that have hot plugged virtual disks. However, hot plugged disks that are not in the virtual machine specification are not included in the snapshot. To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent if it is not included with your operating system. The QEMU guest agent is included with the default Red Hat templates. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 14.1.1. About snapshots A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a development version. A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state). When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. You can perform the following snapshot actions: Create a new snapshot Create a copy of a virtual machine from a snapshot List all snapshots attached to a specific VM Restore a VM from a snapshot Delete an existing VM snapshot VM snapshot controller and custom resources The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots: VirtualMachineSnapshot : Represents a user request to create a snapshot. It contains information about the current state of the VM. VirtualMachineSnapshotContent : Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. VirtualMachineRestore : Represents a user request to restore a VM from a snapshot. The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping. 14.1.2. Creating snapshots You can create snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line. 14.1.2.1. Creating a snapshot by using the web console You can create a snapshot of a virtual machine (VM) by using the OpenShift Container Platform web console. The VM snapshot includes disks that meet the following requirements: Either a data volume or a persistent volume claim Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. If the VM is running, click the options menu and select Stop to power it down. Click the Snapshots tab and then click Take Snapshot . Enter the snapshot name. Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot. If your VM has disks that cannot be included in the snapshot and you wish to proceed, select I am aware of this warning and wish to proceed . Click Save . 14.1.2.2. Creating a snapshot by using the command line You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. Prerequisites Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots. Install the OpenShift CLI ( oc ). Optional: Power down the VM for which you want to create a snapshot. Procedure Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM as in the following example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> Create the VirtualMachineSnapshot object: USD oc create -f <snapshot_name>.yaml The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot , and updates the status and readyToUse fields of the VirtualMachineSnapshot object. Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot: Enter the following command: USD oc wait <vm_name> <snapshot_name> --for condition=Ready Verify the status of the snapshot: InProgress - The online snapshot operation is still in progress. Succeeded - The online snapshot operation completed successfully. Failed - The online snapshot operaton failed. Note Online snapshots have a default time deadline of five minutes ( 5m ). If the snapshot does not complete successfully in five minutes, the status is set to failed . Afterwards, the file system will be thawed and the VM unfrozen but the status remains failed until you delete the failed snapshot image. To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time designated in minutes ( m ) or in seconds ( s ) that you want to specify before the snapshot operation times out. To set no deadline, you can specify 0 , though this is generally not recommended, as it can result in an unresponsive VM. If you do not specify a unit of time such as m or s , the default is seconds ( s ). Verification Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent and that the readyToUse flag is set to true : USD oc describe vmsnapshot <snapshot_name> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 1 The status field of the Progressing condition specifies if the snapshot is still being created. 2 The status field of the Ready condition specifies if the snapshot creation process is complete. 3 Specifies if the snapshot is ready to be used. 4 Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot. 14.1.3. Verifying online snapshots by using snapshot indications Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. Prerequisites You must have attempted to create an online VM snapshot. Procedure Display the output from the snapshot indications by performing one of the following actions: Use the command line to view indicator output in the status stanza of the VirtualMachineSnapshot object YAML. In the web console, click VirtualMachineSnapshot Status in the Snapshot details screen. Verify the status of your online VM snapshot by viewing the values of the status.indications parameter: Online indicates that the VM was running during online snapshot creation. GuestAgent indicates that the QEMU guest agent was running during online snapshot creation. NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error. 14.1.4. Restoring virtual machines from snapshots You can restore virtual machines (VMs) from snapshots by using the OpenShift Container Platform web console or the command line. 14.1.4.1. Restoring a VM from a snapshot by using the web console You can restore a virtual machine (VM) to a configuration represented by a snapshot in the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. If the VM is running, click the options menu and select Stop to power it down. Click the Snapshots tab to view a list of snapshots associated with the VM. Select a snapshot to open the Snapshot Details screen. Click the options menu and select Restore VirtualMachineSnapshot . Click Restore . 14.1.4.2. Restoring a VM from a snapshot by using the command line You can restore an existing virtual machine (VM) to a configuration by using the command line. You can only restore from an offline VM snapshot. Prerequisites Power down the VM you want to restore. Procedure Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source as in the following example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name> Create the VirtualMachineRestore object: USD oc create -f <vm_restore>.yaml The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. Verification Verify that the VM is restored to the state represented by the snapshot and that the complete flag is set to true : USD oc get vmrestore <vm_restore> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1 1 Specifies if the process of restoring the VM to the state represented by the snapshot is complete. 2 The status field of the Progressing condition specifies if the VM is still being restored. 3 The status field of the Ready condition specifies if the VM restoration process is complete. 14.1.5. Deleting snapshots You can delete snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line. 14.1.5.1. Deleting a snapshot by using the web console You can delete an existing virtual machine (VM) snapshot by using the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab to view a list of snapshots associated with the VM. Click the options menu beside a snapshot and select Delete VirtualMachineSnapshot . Click Delete . 14.1.5.2. Deleting a virtual machine snapshot in the CLI You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Delete the VirtualMachineSnapshot object: USD oc delete vmsnapshot <snapshot_name> The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object. Verification Verify that the snapshot is deleted and no longer attached to this VM: USD oc get vmsnapshot 14.1.6. Additional resources CSI Volume Snapshots 14.2. Backing up and restoring virtual machines Important Red Hat supports using OpenShift Virtualization 4.14 or later with OADP 1.3.x or later. OADP versions earlier than 1.3.0 are not supported for back up and restore of OpenShift Virtualization. Back up and restore virtual machines by using the OpenShift API for Data Protection . You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. You can then install the Data Protection Application. Note OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options: Container Storage Interface (CSI) backups Container Storage Interface (CSI) backups with DataMover The following storage options are excluded: File system backup and restore Volume snapshot backup and restore For more information, see Backing up applications with File System Backup: Kopia or Restic . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 14.2.1. Installing and configuring OADP with OpenShift Virtualization As a cluster administrator, you install OADP by installing the OADP Operator. The latest version of the OADP Operator installs Velero 1.14 . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins. Back up virtual machines by creating a Backup custom resource (CR). Warning Red Hat support is limited to only the following options: CSI backups CSI backups with DataMover. You restore the Backup CR by creating a Restore CR. Additional resources OADP plugins Backup custom resource (CR) Restore CR Using Operator Lifecycle Manager on restricted networks 14.2.2. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The kubevirt plugin is mandatory for OpenShift Virtualization. 3 Specify the plugin for the backup provider, for example, gcp , if it exists. 4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia are available. By default, Kopia runs on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 14.3. Disaster recovery OpenShift Virtualization supports using disaster recovery (DR) solutions to ensure that your environment can recover after a site outage. To use these methods, you must plan your OpenShift Virtualization deployment in advance. 14.3.1. About disaster recovery methods For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the Red Hat OpenShift Virtualization disaster recovery guide in the Red Hat Knowledgebase. The two primary DR methods for OpenShift Virtualization are Metropolitan Disaster Recovery (Metro-DR) and Regional-DR. Metro-DR Metro-DR uses synchronous replication. It writes to storage at both the primary and secondary sites so that the data is always synchronized between sites. Because the storage provider is responsible for ensuring that the synchronization succeeds, the environment must meet the throughput and latency requirements of the storage provider. Regional-DR Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites. 14.3.1.1. Metro-DR for Red Hat OpenShift Data Foundation OpenShift Virtualization supports the Metro-DR solution for OpenShift Data Foundation , which provides two-way synchronous data replication between managed OpenShift Virtualization clusters installed on primary and secondary sites. This solution combines Red Hat Advanced Cluster Management (RHACM), Red Hat Ceph Storage, and OpenShift Data Foundation components. Use this solution during a site disaster to fail applications from the primary to the secondary site, and to relocate the application back to the primary site after restoring the disaster site. This synchronous solution is only available to metropolitan distance data centers with a 10 millisecond latency or less. For more information about using the Metro-DR solution for OpenShift Data Foundation with OpenShift Virtualization, see the Red Hat Knowledgebase .
[ "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>", "oc create -f <snapshot_name>.yaml", "oc wait <vm_name> <snapshot_name> --for condition=Ready", "oc describe vmsnapshot <snapshot_name>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>", "oc create -f <vm_restore>.yaml", "oc get vmrestore <vm_restore>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <snapshot_name>", "oc get vmsnapshot", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'", "{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/virtualization/backup-and-restore
Chapter 4. Enabling substitutable variables across the order process workflow
Chapter 4. Enabling substitutable variables across the order process workflow You can substitute variables across products used in your order process workflow. Implementing substitutable variables requires creating or editing the job template survey on Ansible Tower and enabling substitution on surveys attached to products on Automation Services Catalog. Substitution format and requirements Substitutable values are expressed in the format {{substitution_express}} with no spaces allowed between the braces and the expression. Refer to Substitute variables for information on each variable you can use for substitution. 4.1. Implementing substitutable variables in your order process workflow In this section we will demonstrate how to align data you expose to Automation Services Catalog in your playbooks with substitutable variables, using the playbooks examples provided earlier in this guide. 4.1.1. Creating a before order process to pass a value to the product order. Example playbook before_order.yml for before order process This playbook returns to Automation Services Catalog a playbook artifact favorite_color that has the value orange . You do not need to make a job template survey on Ansible Tower since no user input is required. Instead, you can pass favorite_color to the product playbook by editing the survey on Automation Services Catalog set to the before order process product. 4.1.2. Configuring the product order to use a substituted variable This example product playbook accepts a substituted variable for {{favorite_color}} and passes back to Automation Services Catalog the artifact 'after_data'. Example playbook product_order.yml Create a job template on Ansible Tower for the 'product_order.yml' playbook that sets favorite_color as the *Answer Variable Name. Create a survey for the product_order.yml job template that includes: Enter What is your favorite color? in the Prompt field. Enter favorite_color in the Answer Variable Name field. Click UPDATE . Enable the product order survey on Automation Services Catalog to use the 'favorite_color' value passed from before_order.yml . In the survey attached to your product order : Update fields with the following values: Name : favorite_color Initial Value : {{before.before_order.artifacts.favorite_color}} Label : What is your favorite color? Placeholder : {{before.before_order.artifacts.favorite_color}} Toggle the Disabled switch to the active position. Toggle the Substitution switch to the active position. You have now configured the product order to use the favorite_color artifact from the before_order.yml playbook as a substituted variable. 4.1.3. Configuring the after order product to accept after_data artifact values from the product order Use surveys to pass the 'product_order.yml' artifact after_data to after_order.yml using substitutable variables. Example playbook after_order.yml for after order product Create a job template on Ansible Tower for the 'after_order.yml' playbook that sets after_data as the *Answer Variable Name. In the survey attached to your after_order.yml job template: Type "Enter your after data" in the Prompt field. Enter after_data in the Answer Variable Name field. Click UPDATE . Enable the after order survey on Automation Services Catalog to use the 'after_data' value passed from after_order.yml . In the survey attached to your after order : Update fields with the following values: Name : after_data Initial Value : {{product.artifacts.after_data}} Label : "Enter your after data" Placeholder : {{product.artifacts.after_data}} Toggle the Disabled switch to the active position. Toggle the Substitution switch to the active position. The playbook will display the value passed for after data when the job template is run. 4.2. Substitute variables reference A substitution has the format of {{substitution_express}} . Do not use a space between the curly brackets and the expression. Substituted values are set in Ansible Tower jobs template surveys. Substituted variables pass from one product in the order process to another, depending on your configuration. See Surveys in the Ansible Tower User Guide for more information on updating job template surveys. Follow the table below when including substitutable variables in your surveys. Table 4.1. Object model reference table Object Expression Notes Order order.order_id order.created_at order.ordered_by.name order.ordered_by.email order.approval.updated_at order.approval.decision order.approval.reason order.created_at` is the date and time when the order started. Date format: 2007-02-10T20:30:45Z Example: {{order.approval.reason}} Before order before.<order_process_name>.artifacts.<fact_name> before.<order_process_name>.parameters.<parameter_name> before.<order_process_name>.status No quotes are needed for the order_process_name if it contains space. Example: {{before.Sample Order.artifacts.ticket_number}} Product product.name product.description product.long_description product.help_url product.support_url product.vendor product.platform product.portfolio.name product.portfolio.description product.artifacts.<fact_name> product.parameters.<parameter_name> product.status Example: {{product.artifacts.after_data}} After order after.<order_process_name>.artifacts.<fact_name> after.<order_process_name>.parameters.<parameter_name> after.<order_process_name>.status No quotes are needed for the order_process_name if it contains space. Example: {{after.SampleOrder.status}}
[ "This playbook prints a simple debug message and set_stats - name: Echo Hello, world! hosts: localhost gather_facts: yes tasks: - debug: msg: \"Hello, world!\" - set_stats: data: expose_to_cloud_redhat_com_favorite_color: \"orange\"", "This playbook prints a simple debug message with given information - name: Echo Hello, world! hosts: localhost gather_facts: yes tasks: - debug: msg: \"Hello, {{favorite_color}} world!\" - set_stats: data: expose_to_cloud_redhat_com_after_data: \"{{favorite_color}}\"", "This playbook prints a simple debug message with given information - name: Echo Hello, world! hosts: localhost gather_facts: yes tasks: - debug: msg: \"{{after_data}}\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/integrating_automation_services_catalog_with_your_it_service_management_itsm_systems/assembly-itsm-substitution
Chapter 4. About Logging
Chapter 4. About Logging As a cluster administrator, you can deploy logging on an Red Hat OpenShift Service on AWS cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red Hat managed log storage. You can also visualize your log data in the Red Hat OpenShift Service on AWS web console, or the Kibana web console, depending on your deployed log storage solution. Note The Kibana web console is now deprecated is planned to be removed in a future logging release. Red Hat OpenShift Service on AWS cluster administrators can deploy logging by using Operators. For information, see xref :../../observability/logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing logging]. The Operators are responsible for deploying, upgrading, and maintaining logging. After the Operators are installed, you can create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support logging. You can also create a ClusterLogForwarder CR to specify which logs are collected, how they are transformed, and where they are forwarded to. Note Because the internal Red Hat OpenShift Service on AWS Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref :../../observability/logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Forward audit logs to the log store]. 4.1. Logging architecture The major components of the logging are: Collector The collector is a daemonset that deploys pods to each Red Hat OpenShift Service on AWS node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Log store The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Visualization You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The Red Hat OpenShift Service on AWS web console UI is provided by enabling the Red Hat OpenShift Service on AWS console plugin. Note The Kibana web console is now deprecated is planned to be removed in a future logging release. Logging collects container logs and node logs. These are categorized into types: Application logs Container logs generated by user applications running in the cluster, except infrastructure container applications. Infrastructure logs Container logs generated by infrastructure namespaces: openshift* , kube* , or default , as well as journald messages from nodes. Audit logs Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the auditd , kube-apiserver , openshift-apiserver services, as well as the ovn project if enabled. Additional resources xref :../../observability/logging/log_visualization/log-visualization-ocp-console.adoc#log-visualization-ocp-console[Log visualization with the web console] 4.2. About deploying logging Administrators can deploy the logging by using the Red Hat OpenShift Service on AWS web console or the OpenShift CLI ( oc ) to install the logging Operators. The Operators are responsible for deploying, upgrading, and maintaining the logging. Administrators and application developers can view the logs of the projects for which they have view access. 4.2.1. Logging custom resources You can configure your logging deployment with custom resource (CR) YAML files implemented by each Operator. Red Hat OpenShift Logging Operator : ClusterLogging (CL) - After the Operators are installed, you create a ClusterLogging custom resource (CR) to schedule logging pods and other resources necessary to support the logging. The ClusterLogging CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the ClusterLogging CR and adjusts the logging deployment accordingly. ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration. Loki Operator : LokiStack - Controls the Loki cluster as log store and the web proxy with Red Hat OpenShift Service on AWS authentication integration to enforce multi-tenancy. OpenShift Elasticsearch Operator : Note These CRs are generated and managed by the OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator. ElasticSearch - Configure and deploy an Elasticsearch instance as the default log store. Kibana - Configure and deploy Kibana instance to search, query and view logs. 4.2.2. Logging requirements Hosting your own logging stack requires a large amount of compute resources and storage, which might be dependent on your cloud service quota. The compute resource requirements can start at 48 GB or more, while the storage requirement can be as large as 1600 GB or more. The logging stack runs on your worker nodes, which reduces your available workload resource. With these considerations, hosting your own logging stack increases your cluster operating costs. For information, see xref :../../observability/logging/log_collection_forwarding/log-forwarding.adoc#about-log-collection_log-forwarding[About log collection and forwarding]. 4.2.3. About JSON Red Hat OpenShift Service on AWS Logging You can use JSON logging to configure the Log Forwarding API to parse JSON strings into a structured object. You can perform the following tasks: Parse JSON logs Configure JSON log data for Elasticsearch Forward JSON logs to the Elasticsearch log store 4.2.4. About collecting and storing Kubernetes events The Red Hat OpenShift Service on AWS Event Router is a pod that watches Kubernetes events and logs them for collection by Red Hat OpenShift Service on AWS Logging. You must manually deploy the Event Router. For information, see xref :../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events]. 4.2.5. About troubleshooting Red Hat OpenShift Service on AWS Logging You can troubleshoot the logging issues by performing the following tasks: Viewing logging status Viewing the status of the log store Understanding logging alerts Collecting logging data for Red Hat Support Troubleshooting for critical alerts 4.2.6. About exporting fields The logging system exports fields. Exported fields are present in the log records and are available for searching from Elasticsearch and Kibana. For information, see xref :../../observability/logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields[About exporting fields]. 4.2.7. About event routing The Event Router is a pod that watches Red Hat OpenShift Service on AWS events so they can be collected by logging. The Event Router collects events from all projects and writes them to STDOUT . Fluentd collects those events and forwards them into the Red Hat OpenShift Service on AWS Elasticsearch instance. Elasticsearch indexes the events to the infra index. You must manually deploy the Event Router. For information, see xref :../../observability/logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[Collecting and storing Kubernetes events].
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/cluster-logging
Chapter 18. Jakarta Batch Application Development
Chapter 18. Jakarta Batch Application Development JBoss EAP supports Jakarta Batch, defined in the Jakarta EE specification. For information about Jakarta Batch, see Jakarta Batch . The batch-jberet subsystem in JBoss EAP facilitates batch configuration and monitoring. To configure your application to use batch processing on JBoss EAP, you must specify the required dependencies . Additional JBoss EAP features for batch processing include Job Specification Language (JSL) inheritance , and batch property injections . 18.1. Required Batch Dependencies To deploy your batch application to JBoss EAP, some additional dependencies that are required for batch processing need to be declared in your application's pom.xml . An example of these required dependencies is shown below. Most of the dependencies have the scope set to provided , as they are already included in JBoss EAP. Example: pom.xml Batch Dependencies <dependencies> <dependency> <groupId>org.jboss.spec.javax.batch</groupId> <artifactId>jboss-batch-api_1.0_spec</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>javax.enterprise</groupId> <artifactId>cdi-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.annotation</groupId> <artifactId>jboss-annotations-api_1.2_spec</artifactId> <scope>provided</scope> </dependency> <!-- Include your application's other dependencies. --> ... </dependencies> 18.2. Job Specification Language (JSL) Inheritance A feature of the JBoss EAP batch-jberet subsystem is the ability to use Job Specification Language (JSL) inheritance to abstract out some common parts of your job definition. Inherit Step and Flow Within the Same Job XML File Parent elements, for example step and flow, are marked with the attribute abstract="true" to exclude them from direct execution. Child elements contain a parent attribute, which points to the parent element. <job id="inheritance" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0"> <!-- abstract step and flow --> <step id="step0" abstract="true"> <batchlet ref="batchlet0"/> </step> <flow id="flow0" abstract="true"> <step id="flow0.step1" parent="step0"/> </flow> <!-- concrete step and flow --> <step id="step1" parent="step0" ="flow1"/> <flow id="flow1" parent="flow0"/> </job> Inherit a Step from a Different Job XML File Child elements, for example step and job, contain: A jsl-name attribute, which specifies the job XML file name, without the .xml extension, containing the parent element. A parent attribute, which points to the parent element in the job XML file specified by jsl-name . Parent elements are marked with the attribute abstract="true" to exclude them from direct execution. Example: chunk-child.xml <job id="chunk-child" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0"> <step id="chunk-child-step" parent="chunk-parent-step" jsl-name="chunk-parent"> </step> </job> Example: chunk-parent.xml <job id="chunk-parent" > <step id="chunk-parent-step" abstract="true"> <chunk checkpoint-policy="item" skip-limit="5" retry-limit="5"> <reader ref="R1"></reader> <processor ref="P1"></processor> <writer ref="W1"></writer> <checkpoint-algorithm ref="parent"> <properties> <property name="parent" value="parent"></property> </properties> </checkpoint-algorithm> <skippable-exception-classes> <include class="java.lang.Exception"></include> <exclude class="java.io.IOException"></exclude> </skippable-exception-classes> <retryable-exception-classes> <include class="java.lang.Exception"></include> <exclude class="java.io.IOException"></exclude> </retryable-exception-classes> <no-rollback-exception-classes> <include class="java.lang.Exception"></include> <exclude class="java.io.IOException"></exclude> </no-rollback-exception-classes> </chunk> </step> </job> 18.3. Batch Property Injections A feature of the JBoss EAP batch-jberet subsystem is the ability to have properties defined in the job XML file injected into fields in the batch artifact class. Properties defined in the job XML file can be injected into fields using the @Inject and @BatchProperty annotations. The injection field can be any of the following Java types: java.lang.String java.lang.StringBuilder java.lang.StringBuffer any primitive type, and its wrapper type: boolean , Boolean int , Integer double , Double long , Long char , Character float , Float short , Short byte , Byte java.math.BigInteger java.math.BigDecimal java.net.URL java.net.URI java.io.File java.util.jar.JarFile java.util.Date java.lang.Class java.net.Inet4Address java.net.Inet6Address java.util.List , List<?> , List<String> java.util.Set , Set<?> , Set<String> java.util.Map , Map<?, ?> , Map<String, String> , Map<String, ?> java.util.logging.Logger java.util.regex.Pattern javax.management.ObjectName The following array types are also supported: java.lang.String[] any primitive type, and its wrapper type: boolean[] , Boolean[] int[] , Integer[] double[] , Double[] long[] , Long[] char[] , Character[] float[] , Float[] short[] , Short[] byte[] , Byte[] java.math.BigInteger[] java.math.BigDecimal[] java.net.URL[] java.net.URI[] java.io.File[] java.util.jar.JarFile[] java.util.zip.ZipFile[] java.util.Date[] java.lang.Class[] Shown below are a few examples of using batch property injections: Injecting a Number into a Batchlet Class as Various Types Injecting a Number Sequence into a Batchlet Class as Various Arrays Injecting a Class Property into a Batchlet Class Assigning a Default Value to a Field Annotated for Property Injection Injecting a Number into a Batchlet Class as Various Types Example: Job XML File <batchlet ref="myBatchlet"> <properties> <property name="number" value="10"/> </properties> </batchlet> Example: Artifact Class @Named public class MyBatchlet extends AbstractBatchlet { @Inject @BatchProperty int number; // Field name is the same as batch property name. @Inject @BatchProperty (name = "number") // Use the name attribute to locate the batch property. long asLong; // Inject it as a specific data type. @Inject @BatchProperty (name = "number") Double asDouble; @Inject @BatchProperty (name = "number") private String asString; @Inject @BatchProperty (name = "number") BigInteger asBigInteger; @Inject @BatchProperty (name = "number") BigDecimal asBigDecimal; } Injecting a Number Sequence into a Batchlet Class as Various Arrays Example: Job XML File <batchlet ref="myBatchlet"> <properties> <property name="weekDays" value="1,2,3,4,5,6,7"/> </properties> </batchlet> Example: Artifact Class @Named public class MyBatchlet extends AbstractBatchlet { @Inject @BatchProperty int[] weekDays; // Array name is the same as batch property name. @Inject @BatchProperty (name = "weekDays") // Use the name attribute to locate the batch property. Integer[] asIntegers; // Inject it as a specific array type. @Inject @BatchProperty (name = "weekDays") String[] asStrings; @Inject @BatchProperty (name = "weekDays") byte[] asBytes; @Inject @BatchProperty (name = "weekDays") BigInteger[] asBigIntegers; @Inject @BatchProperty (name = "weekDays") BigDecimal[] asBigDecimals; @Inject @BatchProperty (name = "weekDays") List asList; @Inject @BatchProperty (name = "weekDays") List<String> asListString; @Inject @BatchProperty (name = "weekDays") Set asSet; @Inject @BatchProperty (name = "weekDays") Set<String> asSetString; } Injecting a Class Property into a Batchlet Class Example: Job XML File <batchlet ref="myBatchlet"> <properties> <property name="myClass" value="org.jberet.support.io.Person"/> </properties> </batchlet> Example: Artifact Class @Named public class MyBatchlet extends AbstractBatchlet { @Inject @BatchProperty private Class myClass; } Assigning a Default Value to a Field Annotated for Property Injection You can assign a default value to a field in an artifact Java class in the case where the target batch property is not defined in the job XML file. If the target property is resolved to a valid value, it is injected into that field; otherwise, no value is injected and the default field value is used. Example: Artifact Class /** Comment character. If commentChar batch property is not specified in job XML file, use the default value '#'. */ @Inject @BatchProperty private char commentChar = '#';
[ "<dependencies> <dependency> <groupId>org.jboss.spec.javax.batch</groupId> <artifactId>jboss-batch-api_1.0_spec</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>javax.enterprise</groupId> <artifactId>cdi-api</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.annotation</groupId> <artifactId>jboss-annotations-api_1.2_spec</artifactId> <scope>provided</scope> </dependency> <!-- Include your application's other dependencies. --> </dependencies>", "<job id=\"inheritance\" xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" version=\"1.0\"> <!-- abstract step and flow --> <step id=\"step0\" abstract=\"true\"> <batchlet ref=\"batchlet0\"/> </step> <flow id=\"flow0\" abstract=\"true\"> <step id=\"flow0.step1\" parent=\"step0\"/> </flow> <!-- concrete step and flow --> <step id=\"step1\" parent=\"step0\" next=\"flow1\"/> <flow id=\"flow1\" parent=\"flow0\"/> </job>", "<job id=\"chunk-child\" xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" version=\"1.0\"> <step id=\"chunk-child-step\" parent=\"chunk-parent-step\" jsl-name=\"chunk-parent\"> </step> </job>", "<job id=\"chunk-parent\" > <step id=\"chunk-parent-step\" abstract=\"true\"> <chunk checkpoint-policy=\"item\" skip-limit=\"5\" retry-limit=\"5\"> <reader ref=\"R1\"></reader> <processor ref=\"P1\"></processor> <writer ref=\"W1\"></writer> <checkpoint-algorithm ref=\"parent\"> <properties> <property name=\"parent\" value=\"parent\"></property> </properties> </checkpoint-algorithm> <skippable-exception-classes> <include class=\"java.lang.Exception\"></include> <exclude class=\"java.io.IOException\"></exclude> </skippable-exception-classes> <retryable-exception-classes> <include class=\"java.lang.Exception\"></include> <exclude class=\"java.io.IOException\"></exclude> </retryable-exception-classes> <no-rollback-exception-classes> <include class=\"java.lang.Exception\"></include> <exclude class=\"java.io.IOException\"></exclude> </no-rollback-exception-classes> </chunk> </step> </job>", "<batchlet ref=\"myBatchlet\"> <properties> <property name=\"number\" value=\"10\"/> </properties> </batchlet>", "@Named public class MyBatchlet extends AbstractBatchlet { @Inject @BatchProperty int number; // Field name is the same as batch property name. @Inject @BatchProperty (name = \"number\") // Use the name attribute to locate the batch property. long asLong; // Inject it as a specific data type. @Inject @BatchProperty (name = \"number\") Double asDouble; @Inject @BatchProperty (name = \"number\") private String asString; @Inject @BatchProperty (name = \"number\") BigInteger asBigInteger; @Inject @BatchProperty (name = \"number\") BigDecimal asBigDecimal; }", "<batchlet ref=\"myBatchlet\"> <properties> <property name=\"weekDays\" value=\"1,2,3,4,5,6,7\"/> </properties> </batchlet>", "@Named public class MyBatchlet extends AbstractBatchlet { @Inject @BatchProperty int[] weekDays; // Array name is the same as batch property name. @Inject @BatchProperty (name = \"weekDays\") // Use the name attribute to locate the batch property. Integer[] asIntegers; // Inject it as a specific array type. @Inject @BatchProperty (name = \"weekDays\") String[] asStrings; @Inject @BatchProperty (name = \"weekDays\") byte[] asBytes; @Inject @BatchProperty (name = \"weekDays\") BigInteger[] asBigIntegers; @Inject @BatchProperty (name = \"weekDays\") BigDecimal[] asBigDecimals; @Inject @BatchProperty (name = \"weekDays\") List asList; @Inject @BatchProperty (name = \"weekDays\") List<String> asListString; @Inject @BatchProperty (name = \"weekDays\") Set asSet; @Inject @BatchProperty (name = \"weekDays\") Set<String> asSetString; }", "<batchlet ref=\"myBatchlet\"> <properties> <property name=\"myClass\" value=\"org.jberet.support.io.Person\"/> </properties> </batchlet>", "@Named public class MyBatchlet extends AbstractBatchlet { @Inject @BatchProperty private Class myClass; }", "/** Comment character. If commentChar batch property is not specified in job XML file, use the default value '#'. */ @Inject @BatchProperty private char commentChar = '#';" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/java_batch_application_development
Chapter 1. Red Hat Software Collections 3.8
Chapter 1. Red Hat Software Collections 3.8 This chapter serves as an overview of the Red Hat Software Collections 3.8 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.8 is available for Red Hat Enterprise Linux 7. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. Note In Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9, similar components are provided as Application Streams . All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections Components" lists components that are supported at the time of the Red Hat Software Collections 3.8 release. All Software Collections are currently supported only on Red Hat Enterprise Linux 7. Table 1.1. Red Hat Software Collections Components Component Software Collection Description Red Hat Developer Toolset 12.1 devtoolset-12 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Red Hat Developer Toolset 12.1 devtoolset-12 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.30.1 rh-perl530 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl530 Software Collection provides additional utilities, scripts, and database connectors for MySQL , PostgreSQL , and SQLite . It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules, the LWP::UserAgent module for communicating with the HTTP servers, and the LWP::Protocol::https module for securing the communication. The rh-perl530 packaging is aligned with upstream; the perl530-perl package installs also core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.3.33 rh-php73 A release of PHP 7.3 with PEAR 1.10.9, APCu 5.1.17, and the Xdebug extension. Python 3.8.14 rh-python38 The rh-python38 Software Collection contains Python 3.8, which introduces new Python modules, such as contextvars , dataclasses , or importlib.resources , new language features, improved developer experience, and performance improvements . In addition, a set of popular extension libraries is provided, including mod_wsgi (supported only together with the httpd24 Software Collection), numpy , scipy , and the psycopg2 PostgreSQL database connector. Ruby 3.0.4 rh-ruby30 A release of Ruby 3.0. This version provides multiple performance improvements and new features, such as Ractor , Fiber Scheduler and the RBS language . Ruby 3.0 maintains source-level backward compatibility with Ruby 2.7. MariaDB 10.5.16 rh-mariadb105 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version includes various new features, MariaDB Galera Cluster upgraded to version 4, and PAM plug-in version 2.0 . MySQL 8.0.32 rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements. PostgreSQL 10.23 rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . PostgreSQL 12.11 rh-postgresql12 A release of PostgreSQL, which provides the pgaudit extension, various enhancements to partitioning and parallelism, support for the SQL/JSON path language, and performance improvements. PostgreSQL 13.7 rh-postgresql13 A release of PostgreSQL, which enables improved query planning and introduces various performance improvements and two new packages, pg_repack and plpython3 . Node.js 14.21.3 rh-nodejs14 A release of Node.js with V8 version 8.3, a new experimental WebAssembly System Interface (WASI), and a new experimental Async Local Storage API. nginx 1.20.1 rh-nginx120 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version supports client SSL certificate validation with OCSP and improves support for HTTP/2 . Apache httpd 2.4.34 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 6.0.8 rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.6.1 rh-maven36 A release of Maven, a software project management and comprehension tool. This release provides various enhancements and bug fixes. Git 2.27.0 rh-git227 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version introduces numerous enhancements; for example, the git checkout command split into git switch and git restore , and changed behavior of the git rebase command . In addition, Git Large File Storage (LFS) has been updated to version 2.11.0. Redis 6.0.16 rh-redis6 A release of Redis 6.0, a persistent key-value database . Redis now supports SSL on all channels, Access Control List, and REdis Serialization Protocol version 3 . HAProxy 1.8.24 rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. JDK Mission Control 8.0.1 rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 11 or later to run. Target Java applications must run with at least OpenJDK version 8 so that JMC can access JDK Flight Recorder features. The rh-jmc Software Collection requires the rh-maven36 Software Collection. Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Red Hat Developer Toolset 12 Release Red Hat Developer Toolset 12.1 devtoolset-12 RHEL7 x86_64, s390x, ppc64, ppc64le Components New in Red Hat Software Collections 3.8 Red Hat Developer Toolset 11.0 devtoolset-11 RHEL7 x86_64, s390x, ppc64, ppc64le nginx 1.20.1 rh-nginx120 RHEL7 x86_64, s390x, ppc64le Redis 6.0.16 rh-redis6 RHEL7 x86_64, s390x, ppc64le Components Updated in Red Hat Software Collections 3.8 JDK Mission Control 8.0.1 rh-jmc RHEL7 x86_64 Components Last Updated in Red Hat Software Collections 3.7 Red Hat Developer Toolset 10.1 devtoolset-10 * RHEL7 x86_64, s390x, ppc64, ppc64le MariaDB 10.5.16 rh-mariadb105 RHEL7 x86_64, s390x, ppc64le PostgreSQL 13.7 rh-postgresql13 RHEL7 x86_64, s390x, ppc64le Ruby 3.0.4 rh-ruby30 RHEL7 x86_64, s390x, ppc64le Ruby 2.7.8 rh-ruby27 * RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.10 rh-ruby26 * RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 3.6 Git 2.27.0 rh-git227 RHEL7 x86_64, s390x, ppc64le nginx 1.18.0 rh-nginx118 * RHEL7 x86_64, s390x, ppc64le Node.js 14.21.3 rh-nodejs14 RHEL7 x86_64, s390x, ppc64le Apache httpd 2.4.34 httpd24 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.3.33 rh-php73 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.24 rh-haproxy18 RHEL7 x86_64 Perl 5.30.1 rh-perl530 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.9 rh-ruby25 * RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 3.5 Red Hat Developer Toolset 9.1 devtoolset-9 * RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Python 3.8.14 rh-python38 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 6.0.8 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Apache httpd 2.4.34 (the last update for RHEL6) httpd24 (RHEL6)* RHEL6 x86_64 Components Last Updated in Red Hat Software Collections 3.4 Node.js 12.22.12 rh-nodejs12 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.16.1 rh-nginx116 * RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 12.11 rh-postgresql12 RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.6.1 rh-maven36 RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le MariaDB 10.3.35 rh-mariadb103 * RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.5 rh-redis5 * RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.24 rh-php72 * RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.32 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.21.0 rh-nodejs10 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 * RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.4 rh-git218 * RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 * RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 * RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 * RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.23 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 * RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 * RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.12 rh-python36 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 * RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.19 rh-postgresql96 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.17.0 rh-nodejs8 * RHEL7 x86_64, s390x, aarch64, ppc64le Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 * RHEL7 x86_64 nginx 1.10.2 rh-nginx110 * RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 * RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 * RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.18 python27 * RHEL6*, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common * RHEL6, RHEL7 x86_64 Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 * RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 * RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD and Intel 64-bit architectures s390x - The 64-bit IBM Z architecture aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.8 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on the following architectures: AMD and Intel 64-bit architectures 64-bit IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.8 adds the following new Software Collections: devtoolset-12 (available since November 2022) - see Section 1.3.2, "Changes in Red Hat Developer Toolset 12" devtoolset-11 - see Section 1.3.3, "Changes in Red Hat Developer Toolset 11" rh-nginx120 - see Section 1.3.4, "Changes in nginx" rh-redis6 - see Section 1.3.5, "Changes in Redis" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following component has been updated in Red Hat Software Collections 3.8: rh-jmc - see Section 1.3.6, "Changes in JDK Mission Control" httpd24 (asynchronous update) - see Section 1.3.7, "Changes in Apache httpd (asynchronous update)" Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.8: rhscl/devtoolset-12-toolchain-rhel7 (available since November 2022) rhscl/devtoolset-12-perftools-rhel7 (available since November 2022) rhscl/nginx-120-rhel7 rhscl/redis-6-rhel7 For more information about Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset 12 1.3.2.1. Changes in Red Hat Developer Toolset 12.0 The following components have been upgraded in Red Hat Developer Toolset 12.0 compared to the release: GCC to version 12.1.1 annobin to version 10.76 elfutils to version 0.187 GDB to version 11.2 strace to version 5.18 SystemTap to version 4.7 Valgrind to version 3.19.0 Dyninst to version 12.1.0 In addition, a bug fix update is available for binutils . For detailed changes in Red Hat Developer Toolset 12.0, see the Red Hat Developer Toolset User Guide . 1.3.2.2. Changes in Red Hat Developer Toolset 12.1 The following components have been upgraded in Red Hat Developer Toolset 12.1: GCC to version 12.2.1 annobin to version 11.08 In addition, a security update is available for binutils . For more information about Red Hat Developer Toolset 12.1, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in Red Hat Developer Toolset 11 The following components have been upgraded in Red Hat Developer Toolset 12.1 compared to the release: GCC to version 11.2.1 binutils to version 2.36.1 elfutils to version 0.185 dwz to version 0.14 GDB to version 10.2 strace to version 5.13 SystemTap to version 4.5 Valgrind to version 3.17.0 Dyninst to version 11.0.0 make to version 4.3 annobin to version 9.82 For detailed information on changes in 12.1, see the Red Hat Developer Toolset User Guide . 1.3.4. Changes in nginx The new rh-nginx120 Software Collection introduces nginx 1.20.1 , which provides a number of bug and security fixes, new features, and enhancements over version 1.18. New features: nginx now supports client SSL certificate validation with Online Certificate Status Protocol (OCSP). nginx now supports cache clearing based on the minimum amount of free space. This support is implemented as the min_free parameter of the proxy_cache_path directive. A new ngx_stream_set_module module has been added, which enables you to set a value for a variable. Enhanced directives: Multiple new directives are now available, such as ssl_conf_command and ssl_reject_handshake . The proxy_cookie_flags directive now supports variables. Improved support for HTTP/2: The ngx_http_v2 module now includes the lingering_close , lingering_time , lingering_timeout directives. Handling connections in HTTP/2 has been aligned with HTTP/1.x. From nginx 1.20 , use the keepalive_timeout and keepalive_requests directives instead of the removed http2_recv_timeout , http2_idle_timeout , and http2_max_requests directives. For more information regarding changes in nginx , refer to the upstream release notes . For migration instructions, see Section 5.4, "Migrating to nginx 1.20" . 1.3.5. Changes in Redis The new rh-redis6 Software Collection includes Redis 6.0.16 . This version provides multiple enhancements and bug fixes over version 5.0.5 distributed with an earlier Red Hat Software Collections release. Notable changes over Redis 5 include: Redis now supports SSL on all channels. Redis now supports Access Control List (ACL), which defines user permissions for command calls and key pattern access. Redis now supports REdis Serialization Protocol version 3 (RESP3), which returns more semantical replies. Redis can now optionally use threads to handle I/O. Redis now offers server-side support for client-side caching of key values. The Redis active expire cycle has been improved to enable faster eviction of expired keys. For migration and compatibility notes, see Section 5.5, "Migrating to Redis 6" . For detailed changes in Redis , see the upstream release notes . 1.3.6. Changes in JDK Mission Control JDK Mission Control (JMC), provided by the rh-jmc Software Collection, has been upgraded to version 8.0.1, which provides bug and security fixes. 1.3.7. Changes in Apache httpd (asynchronous update) To fix CVE-2022-29404 , the default value for the LimitRequestBody directive in the Apache HTTP Server has been changed from 0 (unlimited) to 1 GiB with the release of the RHSA-2022:6753 advisory. On systems where the value of LimitRequestBody is not explicitly specified in an httpd configuration file, updating the httpd package sets LimitRequestBody to the default value of 1 GiB. As a consequence, if the total size of the HTTP request body exceeds this 1 GiB default limit, httpd returns the 413 Request Entity Too Large error code. If the new default allowed size of an HTTP request message body is insufficient for your use case, update your httpd configuration files within the respective context (server, per-directory, per-file, or per-location) and set your preferred limit in bytes. For example, to set a new 2 GiB limit, use: Systems already configured to use any explicit value for the LimitRequestBody directive are unaffected by this change. 1.4. Compatibility Information Red Hat Software Collections 3.8 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD and Intel 64-bit architectures, 64-bit IBM Z, and IBM POWER, little endian. Certain previously released components are available also for the 64-bit ARM architecture. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues rh-mariadb component The rh-mariadb103 Software Collection provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. The rh-mariadb105 Software Collection provides the plug-in versions 1.0 and 2.0, version 2.0 is the default. The PAM plug-in version 1.0 in MariaDB does not work. To work around this problem, use the PAM plug-in version 2.0 provided by rh-mariadb105 . rh-ruby27 component, BZ# 1836201 When a custom script requires the Psych YAML parser and afterwards uses the Gem.load_yaml method, running the script fails with the following error message: To work around this problem, add the gem 'psych' line to the script somewhere above the require 'psych' line: ... gem 'psych' ... require 'psych' Gem.load_yaml multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 component, BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 component, BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, 64-bit IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd . To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. maven component When the user has installed both the Red Hat Enterprise Linux system version of maven-local package and the rh-maven*-maven-local package, XMvn , a tool used for building Java RPM packages, run from the Maven Software Collection tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. httpd , mariadb , mysql , nodejs , perl , php , python , and ruby components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , or rh-ruby* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exists. mariadb , mysql , postgresql components Red Hat Software Collections contains the MySQL 8.0 , MariaDB 10.3 , MariaDB 10.5 , PostgreSQL 10 , PostgreSQL 12 , and PostgreSQL 13 database servers. The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system. A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 10 client library with the PostgreSQL 12 or 13 daemon works as expected. mariadb , mysql components MariaDB and MySQL do not make use of the /opt/ provider / collection /root prefix when creating log files. Note that log files are saved in the /var/opt/ provider / collection /log/ directory, not in /opt/ provider / collection /root/var/log/ . 1.6. Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example: instead of When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_ prefix _python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_ prefix _php ). ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_ prefix _ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_ prefix _perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_ prefix _nginx ). python component To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change has been implemented in the python27 and rh-python38 Software Collections with the release of the RHSA-2021:3252 and RHSA-2021:3254 advisories. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) Knowledgebase article. python component The release of the RHSA-2021:3254 advisory introduces the following change in the rh-python38 Software Collection: To mitigate CVE-2021-29921 , the Python ipaddress module now rejects IPv4 addresses with leading zeros with an AddressValueError: Leading zeros are not permitted error. Customers who rely on the behavior can pre-process their IPv4 address inputs to strip the leading zeros off. For example: To strip the leading zeros off with an explicit loop for readability, use: 1.7. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl .
[ "LimitRequestBody 2147483648", "superclass mismatch for class Mark (TypeError)", "gem 'psych' require 'psych' Gem.load_yaml", "[mysqld] character-set-server=utf8", "ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems", "Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'", "Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user", "su -l postgres -c \"scl enable rh-postgresql94 psql\"", "scl enable rh-postgresql94 bash su -l postgres -c psql", ">>> def reformat_ip(address): return '.'.join(part.lstrip('0') if part != '0' else part for part in address.split('.')) >>> reformat_ip('0127.0.0.1') '127.0.0.1'", "def reformat_ip(address): parts = [] for part in address.split('.'): if part != \"0\": part = part.lstrip('0') parts.append(part) return '.'.join(parts)" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-rhscl
Chapter 4. Notifications
Chapter 4. Notifications Notifications are sent to admins and members to make it easier to parse developer activity (new account). 4.1. Types of notifications There are different types of notifications: Accounts Billing Applications Service subscriptions Usage alerts 4.2. Visibility Admin users have access to all notifications. Member users have access only to notifications of the areas they have been given access to. For example, a member will only have access to notifications related to billing if they have access to the billing section. For enterprise accounts , member users will only have access to notifications regarding activity of the services they have been granted access to. 4.3. Subscribing to notifications by email Subscriptions are personal and can only be modified by the person receiving those notifications. To edit your subscriptions: Navigate to Account Settings > Personal > Notification Preferences . Select the notifications you would like to receive. Click Update Notification Preferences . 4.4. Web notifications In addition to email notifications, you can find information about the last activities in your Dashboard:
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/notifications
Chapter 1. Overview
Chapter 1. Overview 1.1. Major changes in RHEL 8.7 Installer and image creation Following are image builder key highlights in RHEL 8.7-GA: Image builder on-premise now supports: Uploading images to GCP Customizing the /boot partition Pushing a container image directly to a registry Users can now customize their blueprints during the image creation process For more information, see Section 4.1, "Installer and image creation" . Security The DISA STIG for Red Hat Enterprise Linux 8 profile available in the scap-security-guide (SSG) package is now better aligned with DISA's content. This leads to fewer findings against DISA content after SSG remediations. The Center for Internet Security (CIS) profiles available in the scap-security-guide (SSG) package are now aligned with CIS Red Hat Enterprise Linux 8 Benchmark version 2.0.0. This version of the benchmark adds new requirements, removed requirements that are no longer relevant, and reordered some existing requirements. The update impacts the references in the relevant rules and the accuracy of the respective profiles. Changes in the system configuration and the clevis-luks-systemd subpackage enable the Clevis encryption client to unlock also LUKS-encrypted volumes that mount late in the boot process without using the systemctl enable clevis-luks-askpass.path command during the deployment process. See New features - Security for more information. Shells and command-line tools RHEL 8.7 introduces a new package xmlstarlet . With XMLStarlet , you can parse, transform, query, validate, and edit XML files. The following command-line tools have been updated in RHEL 8.7: opencryptoki to version 3.18.0 powerpc-utils to version 1.3.10 libva to version 2.13.0 For more information, see New Features - Shells and command-line tools Infrastructure services The following infrastructure services tools have been updated in RHEL 8.7: chrony to version 4.2 unbound to version 1.16.2 For more information, see New Features - Infrastructure services . Dynamic programming languages, web and database servers Later versions of the following components are now available as new module streams: Ruby 3.1 Mercurial 6.2 Node.js 18 In addition, Redis 6 has been upgraded to version 6.2.7. See New features - Dynamic programming languages, web and database servers for more information. Compilers and development tools Updated performance tools and debuggers The following performance tools and debuggers have been updated in RHEL 8.7: Valgrind 3.19 SystemTap 4.7 Dyninst 12.1.0 elfutils 0.187 Updated performance monitoring tools The following performance monitoring tools have been updated in RHEL 8.7: PCP 5.3.7 Grafana 7.5.13 Updated compiler toolsets The following compiler toolsets have been updated in RHEL 8.7: GCC Toolset 12 LLVM Toolset 14.0.6 Rust Toolset 1.62 Go Toolset 1.18 See New features - Compilers and development tools for more information. Java implementations in RHEL 8 The RHEL 8 AppStream repository includes: The java-17-openjdk packages, which provide the OpenJDK 17 Java Runtime Environment and the OpenJDK 17 Java Software Development Kit. The java-11-openjdk packages, which provide the OpenJDK 11 Java Runtime Environment and the OpenJDK 11 Java Software Development Kit. The java-1.8.0-openjdk packages, which provide the OpenJDK 8 Java Runtime Environment and the OpenJDK 8 Java Software Development Kit. For more information, see OpenJDK documentation . Java tools RHEL 8.7 introduces Maven 3.8 as a new module stream. For more information, see New features - Compilers and development tools . information. Identity Management Identity Management (IdM) in RHEL 8.7 introduces a Technology Preview where you can delegate user authentication to external identity providers (IdPs) that support the OAuth 2 Device Authorization Grant flow. When these users authenticate with SSSD, and after they complete authentication and authorization at the external IdP, they receive RHEL IdM single sign-on capabilities with Kerberos tickets. For more information, see Technology Previews - Identity Management Red Hat Enterprise Linux System Roles Notable new features in 8.7 RHEL System Roles: RHEL System Roles are now available also in playbooks with fact gathering disabled. The ha_cluster role now supports SBD fencing, configuration of Corosync settings, and configuration of bundle resources. The network role now configures network settings for routing rules, supports network configuration using the nmstate API , and users can create connections with IPoIB capability. The microsoft.sql.server role has new variables, such as variables to control configuring a high availability cluster, to manage firewall ports automatically, or variables to search for mssql_tls_cert and mssql_tls_private_key values on managed nodes. The logging role supports various new options, for example startmsg.regex and endmsg.regex in files inputs, or template , severity and facility options. The storage role now includes support for thinly provisioned volumes, and the role now also has less verbosity by default. The sshd role verifies the include directive for the drop-in directory, and the role can now be managed through /etc/ssh/sshd_config. The metrics role can now export postfix performance data. The postfix role now has a new option for overwriting configuration. The firewall role does not require the state parameter when configuring masquerade or icmp_block_inversion. In the firewall role, you can now add, update, or remove services using absent and present states. The role can also provide Ansible facts, and add or remove an interface to the zone using PCI device ID. The firewall role has a new option for overwriting configuration. The selinux role now includes setting of seuser and selevel parameters. 1.2. In-place upgrade and OS conversion In-place upgrade from RHEL 7 to RHEL 8 The possible in-place upgrade paths currently are: From RHEL 7.9 to RHEL 8.4 and RHEL 8.6 on the 64-bit Intel, IBM POWER 8 (little endian), and IBM Z architectures From RHEL 7.6 to RHEL 8.4 on architectures that require kernel version 4.14: IBM POWER 9 (little endian) and IBM Z (Structure A). This is the final in-place upgrade path for these architectures. From RHEL 7.9 to RHEL 8.2 and RHEL 8.6 on systems with SAP HANA on the 64-bit Intel architecture. To ensure your system remains supported after upgrading to RHEL 8.6, either update to the latest RHEL 8.7 version or ensure that the RHEL 8.6 Extended Update Support (EUS) repositories have been enabled. For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 7 to RHEL 8 . For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 7 to RHEL 8 . Note For the successful in-place upgrade of RHEL 7.6 for IBM POWER 9 (little endian) and IBM Z (structure A) architectures, you must manually download the specific Leapp data. For more information, see the Leapp data snapshots for an in-place upgrade Knowledgebase article. Notable enhancements include: The in-place upgrade of SAP Apps systems is now possible on Microsoft Azure with Red Hat Update Infrastructure (RHUI). The in-place upgrade is now possible on Google Cloud Platform with Red Hat Update Infrastructure (RHUI). In-place upgrade from RHEL 6 to RHEL 8 To upgrade from RHEL 6.10 to RHEL 8, follow instructions in Upgrading from RHEL 6 to RHEL 8 . In-place upgrade from RHEL 8 to RHEL 9 Instructions on how to perform an in-place upgrade from RHEL 8 to RHEL 9 using the Leapp utility are provided by the document Upgrading from RHEL 8 to RHEL 9 . Major differences between RHEL 8 and RHEL 9 are documented in Considerations in adopting RHEL 9 . Conversion from a different Linux distribution to RHEL If you are using CentOS Linux 8 or Oracle Linux 8, you can convert your operating system to RHEL 8 using the Red Hat-supported Convert2RHEL utility. For more information, see Converting from a Linux distribution to RHEL using the Convert2RHEL utility . If you are using an earlier version of CentOS Linux or Oracle Linux, namely versions 6 or 7, you can convert your operating system to RHEL and then perform an in-place upgrade to RHEL 8. Note that CentOS Linux 6 and Oracle Linux 6 conversions use the unsupported Convert2RHEL utility. For more information on unsupported conversions, see How to perform an unsupported conversion from a RHEL-derived Linux distribution to RHEL . For information regarding how Red Hat supports conversions from other Linux distributions to RHEL, see the Convert2RHEL Support Policy document . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Product Life Cycle Checker Kickstart Generator Kickstart Converter Red Hat Enterprise Linux Upgrade Helper Red Hat Satellite Upgrade Helper Red Hat Code Browser JVM Options Configuration Tool Red Hat CVE Checker Red Hat Product Certificates Load Balancer Configuration Tool Yum Repository Configuration Helper Red Hat Memory Analyzer Kernel Oops Analyzer Red Hat Product Errata Advisory Checker Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 8 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 8. Major differences between RHEL 7 and RHEL 8 , including removed functionality, are documented in Considerations in adopting RHEL 8 . Instructions on how to perform an in-place upgrade from RHEL 7 to RHEL 8 are provided by the document Upgrading from RHEL 7 to RHEL 8 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is now available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.7_release_notes/overview
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.py example. USD cd /usr/share/proton/examples/python/ USD python helloworld.py Hello World! 3.3. Running Hello World on Microsoft Windows The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Download and run the Hello World example. > curl -o helloworld.py https://raw.githubusercontent.com/apache/qpid-proton/master/python/examples/helloworld.py > python helloworld.py Hello World!
[ "cd /usr/share/proton/examples/python/ python helloworld.py Hello World!", "> curl -o helloworld.py https://raw.githubusercontent.com/apache/qpid-proton/master/python/examples/helloworld.py > python helloworld.py Hello World!" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/getting_started
Appendix A. List of tickets by component
Appendix A. List of tickets by component Bugzilla and JIRA tickets are listed in this document for reference. The links lead to the release notes in this document that describe the tickets. Component Tickets 389-ds-base Jira:RHEL-19028 , Jira:RHEL-19240 , Jira:RHEL-5390 , Jira:RHEL-5143 , Jira:RHEL-5107 , Jira:RHEL-16338 , Jira:RHEL-14025 , Jira:RHEL-5135 Release Notes Jira:RHELDOCS-17954 , Jira:RHELDOCS-18326 , Jira:RHELDOCS-16861 , Jira:RHELDOCS-16755 , Jira:RHELDOCS-16612 , Jira:RHELDOCS-17102 , Jira:RHELDOCS-17518 SLOF Bugzilla:1910848 accel-config Bugzilla:1843266 anaconda Jira:RHEL-13151 , Bugzilla:2050140 , Jira:RHEL-4707 , Jira:RHEL-4711 , Jira:RHEL-4744 ansible-collection-microsoft-sql Jira:RHEL-19204 , Jira:RHEL-19202 , Jira:RHEL-19203 ansible-freeipa Jira:RHEL-16938 , Jira:RHEL-16933 , Jira:RHEL-19133 , Jira:RHEL-4963 , Jira:RHEL-19129 ant Jira:RHEL-5365 apr Bugzilla:1819607 audit Jira:RHEL-15001 authselect Bugzilla:1892761 bacula Jira:RHEL-6859 brltty Bugzilla:2008197 chrony Jira:RHEL-21069 clang Jira:RHEL-9299 cloud-init Jira:RHEL-7312 , Jira:RHEL-7278 , Jira:RHEL-12122 cmake Jira:RHEL-7396 cockpit Bugzilla:1666722 coreutils Bugzilla:2030661 corosync-qdevice Bugzilla:1784200 crash Jira:RHEL-9010 , Bugzilla:1906482 crash-ptdump-command Bugzilla:1838927 createrepo_c Bugzilla:1973588 crypto-policies Jira:RHEL-2345 , Bugzilla:1919155 , Bugzilla:1660839 device-mapper-multipath Jira:RHEL-6677 , Jira:RHEL-16563 , Bugzilla:2022359 , Bugzilla:2011699 distribution Jira:RHEL-17090 , Bugzilla:1657927 dnf Bugzilla:1986657 dnf-plugins-core Jira:RHEL-17356 , Jira:RHELPLAN-50409 edk2 Bugzilla:1741615 , Bugzilla:1935497 elfutils Jira:RHEL-15924 fapolicyd Jira:RHEL-520 , Bugzilla:2054741 fence-agents Bugzilla:1775847 firewalld Bugzilla:1871860 gcc-toolset-13-binutils Jira:RHEL-25405 gdb Bugzilla:1853140 git Jira:RHEL-17103 git-lfs Jira:RHEL-17102 glibc Jira:RHEL-13720 , Jira:RHEL-10481 , Jira:RHEL-19824 gnome-shell-extensions Bugzilla:1717947 gnome-software Bugzilla:1668760 gnutls Bugzilla:1628553 golang Jira:RHEL-11872 grafana Jira:RHEL-7503 grub2 Jira:RHEL-15856 , Jira:RHEL-15583 , Bugzilla:1583445 initscripts Bugzilla:1875485 ipa Jira:RHEL-16936 , Jira:RHEL-12153 , Jira:RHEL-4964 , Jira:RHEL-10495 , Jira:RHEL-4847 , Jira:RHEL-4898 , Bugzilla:1664719 , Bugzilla:1664718 ipmitool Jira:RHEL-6846 kernel Bugzilla:2041881 , Jira:RHEL-11597 , Bugzilla:1868526 , Bugzilla:1694705 , Bugzilla:1730502 , Bugzilla:1609288 , Bugzilla:1602962 , Bugzilla:1865745 , Bugzilla:1906870 , Bugzilla:1924016 , Bugzilla:1942888 , Bugzilla:1812577 , Bugzilla:1910358 , Bugzilla:1930576 , Bugzilla:1793389 , Bugzilla:1654962 , Bugzilla:1940674 , Bugzilla:1920086 , Bugzilla:1971506 , Bugzilla:2059262 , Bugzilla:2050411 , Bugzilla:2106341 , Bugzilla:1605216 , Bugzilla:1519039 , Bugzilla:1627455 , Bugzilla:1501618 , Bugzilla:1633143 , Bugzilla:1814836 , Bugzilla:1839311 , Bugzilla:1696451 , Bugzilla:1348508 , Bugzilla:1836977 , Bugzilla:1878207 , Bugzilla:1665295 , Bugzilla:1871863 , Bugzilla:1569610 , Bugzilla:1794513 kernel / DMA Engine Jira:RHEL-10097 kernel / Networking / NIC Drivers Jira:RHEL-11398 kernel / Networking / Protocol / tcp Jira:RHEL-6113 kernel / Storage / Device Mapper / Crypt Jira:RHEL-22232 kernel / Storage / Storage Drivers Jira:RHEL-1765 kernel / Virtualization / KVM Jira:RHEL-2451 kernel-rt / Other Jira:RHEL-9318 kexec-tools Bugzilla:2111855 kmod Bugzilla:2103605 krb5 Jira:RHEL-4910 , Bugzilla:2125318 , Bugzilla:1877991 libdnf Jira:RHEL-6421 libgnome-keyring Bugzilla:1607766 libguestfs Bugzilla:1554735 libkcapi Jira:RHEL-5366 , Jira:RHEL-15300 librdkafka Jira:RHEL-12892 librepo Jira:RHEL-10720 libreswan Bugzilla:1989050 libselinux-python-2.8-module Bugzilla:1666328 libvirt Bugzilla:1664592 , Bugzilla:1332758 , Bugzilla:2143160 , Bugzilla:1528684 linuxptp Jira:RHEL-21326 llvm-toolset Jira:RHEL-9028 lvm2 Bugzilla:1496229 , Bugzilla:1768536 mariadb Jira:RHEL-3637 , Bugzilla:1942330 maven Jira:RHEL-17126 mesa Bugzilla:1886147 nfs-utils Bugzilla:2081114 , Bugzilla:1592011 nginx Jira:RHEL-14714 nispor Bugzilla:2153166 nss Bugzilla:1817533 , Bugzilla:1645153 nss_nis Bugzilla:1803161 opencryptoki Jira:RHEL-11413 opencv Bugzilla:1886310 openmpi Bugzilla:1866402 opensc Jira:RHEL-4077 , Bugzilla:1947025 openscap Bugzilla:2161499 openssh Jira:RHEL-1684 , Jira:RHEL-5279 , Bugzilla:2044354 openssl Jira:RHEL-17689 , Bugzilla:1810911 osbuild-composer Jira:RHEL-4649 oscap-anaconda-addon Jira:RHEL-1826 , Bugzilla:1843932 , Bugzilla:1834716 , Bugzilla:1665082 , Jira:RHEL-1810 papi Jira:RHEL-9336 pcs Jira:RHEL-7584 , Jira:RHEL-7668 , Jira:RHEL-7731 , Jira:RHEL-7745 , Bugzilla:1619620 , Bugzilla:1851335 perl-DateTime-TimeZone Jira:RHEL-35685 php Jira:RHEL-14705 pki-core Bugzilla:1729215 , Jira:RHEL-13125 , Bugzilla:1628987 podman Jira:RHELPLAN-167794 , Jira:RHELPLAN-167830 , Jira:RHELPLAN-167822 , Jira:RHELPLAN-168179 , Jira:RHELPLAN-168184 , Jira:RHELPLAN-154435 , Jira:RHELPLAN-168223 policycoreutils Jira:RHEL-24461 polkit Jira:RHEL-34022 postfix Bugzilla:1711885 postgresql Jira:RHEL-3636 pykickstart Bugzilla:1637872 python3.11-lxml Bugzilla:2157673 python36-3.6-module Bugzilla:2165702 qemu-kvm Jira:RHEL-11597 , Jira:RHEL-16696 , Jira:RHEL-13336 , Bugzilla:1740002 , Bugzilla:1719687 , Bugzilla:1966475 , Bugzilla:1792683 , Bugzilla:1651994 rear Jira:RHEL-24729 , Jira:RHEL-17354 , Jira:RHEL-17353 , Bugzilla:1925531 , Bugzilla:2083301 redhat-support-tool Bugzilla:2064575 restore Bugzilla:1997366 rhel-system-roles Jira:RHEL-18170 , Jira:RHEL-3241 , Jira:RHEL-15440 , Jira:RHEL-4624 , Jira:RHEL-16542 , Jira:RHEL-21491 , Jira:RHEL-16965 , Jira:RHEL-16553 , Jira:RHEL-18963 , Jira:RHEL-16975 , Jira:RHEL-21123 , Jira:RHEL-19047 , Jira:RHEL-15038 , Jira:RHEL-21134 , Jira:RHEL-14022 , Jira:RHEL-17667 , Jira:RHEL-15933 , Jira:RHEL-16213 , Jira:RHEL-16977 , Jira:RHEL-5985 , Jira:RHEL-4684 , Jira:RHEL-16501 , Jira:RHEL-17874 , Jira:RHEL-21946 , Jira:RHEL-21400 , Jira:RHEL-15871 , Jira:RHEL-22228 , Jira:RHEL-22229 , Jira:RHEL-3354 , Jira:RHEL-19042 , Jira:RHEL-19044 , Jira:RHEL-19242 , Jira:RHEL-21402 , Jira:RHEL-25509 , Bugzilla:2186908 , Bugzilla:2021685 , Bugzilla:2006081 rpm Bugzilla:1688849 rsyslog Bugzilla:1679512 , Jira:RHELPLAN-10431 rteval Jira:RHEL-8967 , Jira:RHEL-21926 rtla Jira:RHEL-10081 rust-toolset Jira:RHEL-12964 samba Jira:RHEL-16483 , Bugzilla:2009213 , Jira:RHELPLAN-13195 scap-security-guide Jira:RHEL-25250 , Bugzilla:2028428 , Bugzilla:2118758 , Jira:RHEL-1804 , Jira:RHEL-1897 selinux-policy Jira:RHEL-9981 , Jira:RHEL-1388 , Jira:RHEL-15398 , Jira:RHEL-1628 , Jira:RHEL-10087 , Bugzilla:2166153 , Bugzilla:1461914 sos Bugzilla:2011413 spice Bugzilla:1849563 sssd Jira:SSSD-7015 , Bugzilla:2065692 , Bugzilla:2056483 , Bugzilla:1947671 sssd_kcm Jira:SSSD-7015 stunnel Jira:RHEL-2340 subscription-manager Bugzilla:2170082 sysstat Jira:RHEL-12008 , Jira:RHEL-23074 tuna Jira:RHEL-19179 tuned Bugzilla:2113900 udica Bugzilla:1763210 valgrind Jira:RHEL-15926 vdo Bugzilla:1949163 virt-manager Bugzilla:2026985 wayland Bugzilla:1673073 webkit2gtk3 Jira:RHEL-4158 xorg-x11-server Bugzilla:1698565 other Jira:RHELDOCS-17369 , Jira:RHELDOCS-17372 , Jira:RHELDOCS-16955 , Jira:RHELDOCS-16241 , Jira:RHELDOCS-16970 , Jira:RHELDOCS-17060 , Jira:RHELDOCS-17056 , Jira:RHELDOCS-16337 , Jira:RHELDOCS-17261 , Jira:RHELPLAN-123140 , Jira:RHELDOCS-18289 , Jira:RHELDOCS-18323 , Jira:SSSD-6184 , Bugzilla:2025814 , Bugzilla:2077770 , Bugzilla:1777138 , Bugzilla:1640697 , Bugzilla:1697896 , Bugzilla:1961722 , Jira:RHELDOCS-18064 , Jira:RHELDOCS-18049 , Bugzilla:1659609 , Bugzilla:1687900 , Bugzilla:1757877 , Bugzilla:1741436 , Jira:RHELPLAN-27987 , Jira:RHELPLAN-34199 , Jira:RHELPLAN-57914 , Jira:RHELPLAN-96940 , Bugzilla:1974622 , Bugzilla:2028361 , Bugzilla:2041997 , Bugzilla:2035158 , Jira:RHELPLAN-109613 , Bugzilla:2126777 , Jira:RHELDOCS-17126 , Bugzilla:1690207 , Bugzilla:1559616 , Bugzilla:1889737 , Bugzilla:1906489 , Bugzilla:1769727 , Jira:RHELPLAN-27394 , Jira:RHELPLAN-27737 , Jira:RHELDOCS-16861 , Bugzilla:1642765 , Bugzilla:1646541 , Bugzilla:1647725 , Bugzilla:1932222 , Bugzilla:1686057 , Bugzilla:1748980 , Jira:RHELPLAN-71200 , Jira:RHELPLAN-45858 , Bugzilla:1871025 , Bugzilla:1871953 , Bugzilla:1874892 , Bugzilla:1916296 , Jira:RHELDOCS-17573 , Jira:RHELPLAN-100400 , Bugzilla:1926114 , Bugzilla:1904251 , Bugzilla:2011208 , Jira:RHELPLAN-59825 , Bugzilla:1920624 , Jira:RHELPLAN-70700 , Bugzilla:1929173 , Jira:RHELPLAN-85066 , Jira:RHELPLAN-98983 , Bugzilla:2009113 , Bugzilla:1958250 , Bugzilla:2038929 , Bugzilla:2006665 , Bugzilla:2029338 , Bugzilla:2061288 , Bugzilla:2060759 , Bugzilla:2055826 , Bugzilla:2059626 , Jira:RHELPLAN-133171 , Bugzilla:2142499 , Jira:RHELDOCS-16755 , Jira:RHELPLAN-146398 , Jira:RHELDOCS-18107 , Jira:RHELPLAN-153267 , Bugzilla:2225332 , Jira:RHELPLAN-147538 , Jira:RHELDOCS-16612 , Jira:RHELDOCS-17102 , Jira:RHELDOCS-16300 , Jira:RHELDOCS-17038 , Jira:RHELDOCS-17461 , Jira:RHELDOCS-17518 , Jira:RHELDOCS-17623
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/list_of_tickets_by_component
Chapter 12. Log collection and forwarding
Chapter 12. Log collection and forwarding 12.1. About log collection and forwarding The Red Hat OpenShift Logging Operator deploys a collector based on the ClusterLogForwarder resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. 12.1.1. Log collection The log collector is a daemon set that deploys pods to each OpenShift Container Platform node to collect container and node logs. By default, the log collector uses the following sources: System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform. /var/log/containers/*.log for all container logs. If you configure the log collector to collect audit logs, it collects them from /var/log/audit/audit.log . The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration. 12.1.1.1. Log collector types Vector is a log collector offered as an alternative to Fluentd for the logging. You can configure which logging collector type your cluster uses by modifying the ClusterLogging custom resource (CR) collection spec: Example ClusterLogging CR that configures Vector as the collector apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {} # ... 12.1.1.2. Log collection limitations The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort . Important The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source. 12.1.1.3. Log collector features by type Table 12.1. Log Sources Feature Fluentd Vector App container logs [✓] [✓] App-specific routing [✓] [✓] App-specific routing by namespace [✓] [✓] Infra container logs [✓] [✓] Infra journal logs [✓] [✓] Kube API audit logs [✓] [✓] OpenShift API audit logs [✓] [✓] Open Virtual Network (OVN) audit logs [✓] [✓] Table 12.2. Authorization and Authentication Feature Fluentd Vector Elasticsearch certificates [✓] [✓] Elasticsearch username / password [✓] [✓] Amazon Cloudwatch keys [✓] [✓] Amazon Cloudwatch STS [✓] [✓] Kafka certificates [✓] [✓] Kafka username / password [✓] [✓] Kafka SASL [✓] [✓] Loki bearer token [✓] [✓] Table 12.3. Normalizations and Transformations Feature Fluentd Vector Viaq data model - app [✓] [✓] Viaq data model - infra [✓] [✓] Viaq data model - infra(journal) [✓] [✓] Viaq data model - Linux audit [✓] [✓] Viaq data model - kube-apiserver audit [✓] [✓] Viaq data model - OpenShift API audit [✓] [✓] Viaq data model - OVN [✓] [✓] Loglevel Normalization [✓] [✓] JSON parsing [✓] [✓] Structured Index [✓] [✓] Multiline error detection [✓] [✓] Multicontainer / split indices [✓] [✓] Flatten labels [✓] [✓] CLF static labels [✓] [✓] Table 12.4. Tuning Feature Fluentd Vector Fluentd readlinelimit [✓] Fluentd buffer [✓] - chunklimitsize [✓] - totallimitsize [✓] - overflowaction [✓] - flushthreadcount [✓] - flushmode [✓] - flushinterval [✓] - retrywait [✓] - retrytype [✓] - retrymaxinterval [✓] - retrytimeout [✓] Table 12.5. Visibility Feature Fluentd Vector Metrics [✓] [✓] Dashboard [✓] [✓] Alerts [✓] [✓] Table 12.6. Miscellaneous Feature Fluentd Vector Global proxy support [✓] [✓] x86 support [✓] [✓] ARM support [✓] [✓] IBM Power(R) support [✓] [✓] IBM Z(R) support [✓] [✓] IPv6 support [✓] [✓] Log event buffering [✓] Disconnected Cluster [✓] [✓] 12.1.1.4. Collector outputs The following collector outputs are supported: Table 12.7. Supported outputs Feature Fluentd Vector Elasticsearch v6-v8 [✓] [✓] Fluent forward [✓] Syslog RFC3164 [✓] [✓] (Logging 5.7+) Syslog RFC5424 [✓] [✓] (Logging 5.7+) Kafka [✓] [✓] Amazon Cloudwatch [✓] [✓] Amazon Cloudwatch STS [✓] [✓] Loki [✓] [✓] HTTP [✓] [✓] (Logging 5.7+) Google Cloud Logging [✓] [✓] Splunk [✓] (Logging 5.6+) 12.1.2. Log forwarding Administrators can create ClusterLogForwarder resources that specify which logs are collected, how they are transformed, and where they are forwarded to. ClusterLogForwarder resources can be used up to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported so that log forwarders can be configured to send logs securely. Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs. 12.1.2.1. Log forwarding implementations There are two log forwarding implementations available: the legacy implementation, and the multi log forwarder feature. Important Only the Vector collector is supported for use with the multi log forwarder feature. The Fluentd collector can only be used with legacy implementations. 12.1.2.1.1. Legacy implementation In legacy implementations, you can only use one log forwarder in your cluster. The ClusterLogForwarder resource in this mode must be named instance , and must be created in the openshift-logging namespace. The ClusterLogForwarder resource also requires a corresponding ClusterLogging resource named instance in the openshift-logging namespace. 12.1.2.1.2. Multi log forwarder feature The multi log forwarder feature is available in logging 5.8 and later, and provides the following functionality: Administrators can control which users are allowed to define log collection and which logs they are allowed to collect. Users who have the required permissions are able to specify additional log collection configurations. Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated. In multi log forwarder implementations, you are not required to create a corresponding ClusterLogging resource for your ClusterLogForwarder resource. You can create multiple ClusterLogForwarder resources using any name, in any namespace, with the following exceptions: You cannot create a ClusterLogForwarder resource named instance in the openshift-logging namespace, because this is reserved for a log forwarder that supports the legacy workflow using the Fluentd collector. You cannot create a ClusterLogForwarder resource named collector in the openshift-logging namespace, because this is reserved for the collector. 12.1.2.2. Enabling the multi log forwarder feature for a cluster To use the multi log forwarder feature, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the ClusterLogForwarder resource to control access permissions. Important In order to support multi log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces . This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. 12.1.2.2.1. Authorizing log collection RBAC permissions In logging 5.8 and later, the Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account. Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> Additional resources Using RBAC to define and apply permissions Using service accounts in applications Using RBAC Authorization Kubernetes documentation 12.2. Log output types Outputs define the destination where logs are sent to from a log forwarder. You can configure multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. 12.2.1. Supported log forwarding outputs Outputs can be any of the following types: Table 12.8. Supported log output types Output type Protocol Tested with Logging versions Supported collector type Elasticsearch v6 HTTP 1.1 6.8.1, 6.8.23 5.6+ Fluentd, Vector Elasticsearch v7 HTTP 1.1 7.12.2, 7.17.7, 7.10.1 5.6+ Fluentd, Vector Elasticsearch v8 HTTP 1.1 8.4.3, 8.6.1 5.6+ Fluentd [1] , Vector Fluent Forward Fluentd forward v1 Fluentd 1.14.6, Logstash 7.10.1, Fluentd 1.14.5 5.4+ Fluentd Google Cloud Logging REST over HTTPS Latest 5.7+ Vector HTTP HTTP 1.1 Fluentd 1.14.6, Vector 0.21 5.7+ Fluentd, Vector Kafka Kafka 0.11 Kafka 2.4.1, 2.7.0, 3.3.1 5.4+ Fluentd, Vector Loki REST over HTTP and HTTPS 2.3.0, 2.5.0, 2.7, 2.2.1 5.4+ Fluentd, Vector Splunk HEC 8.2.9, 9.0.0 5.7+ Vector Syslog RFC3164, RFC5424 Rsyslog 8.37.0-9.el7, rsyslog-8.39.0 5.4+ Fluentd, Vector [2] Amazon CloudWatch REST over HTTPS Latest 5.4+ Fluentd, Vector Fluentd does not support Elasticsearch 8 in the logging version 5.6.2. Vector supports Syslog in the logging version 5.7 and higher. 12.2.2. Output type descriptions default The on-cluster, Red Hat managed log store. You are not required to configure the default output. Note If you configure a default output, you receive an error message, because the default output name is reserved for referencing the on-cluster, Red Hat managed log store. loki Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. kafka A Kafka broker. The kafka output can use a TCP or TLS connection. elasticsearch An external Elasticsearch instance. The elasticsearch output can use a TLS connection. fluentdForward An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS. Important The fluentdForward output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the http output. syslog An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection. cloudwatch Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). cloudlogging Google Cloud Logging, a monitoring and log storage service hosted by Google Cloud Platform (GCP). 12.3. Enabling JSON log forwarding You can configure the Log Forwarding API to parse JSON strings into a structured object. 12.3.1. Parsing JSON logs You can use a ClusterLogForwarder object to parse JSON logs into a structured object and forward them to a supported output. To illustrate how this works, suppose that you have the following structured JSON log entry: Example structured JSON log entry {"level":"info","name":"fred","home":"bedrock"} To enable parsing JSON log, you add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example: Example snippet showing parse: json pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json When you enable parsing JSON logs by using parse: json , the CR copies the JSON-structured log entry in a structured field, as shown in the following example: Example structured output containing the structured JSON log entry {"structured": { "level": "info", "name": "fred", "home": "bedrock" }, "more fields..."} Important If the log entry does not contain valid structured JSON, the structured field is absent. 12.3.2. Configuring JSON log data for Elasticsearch If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) to group each schema into a single output definition. This way, each schema is forwarded to a separate index. Important If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Structure types You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store: structuredTypeKey is the name of a message field. The value of that field is used to construct the index name. kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name. openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name. kubernetes.container_name uses the container name to construct the index name. structuredTypeName : If the structuredTypeKey field is not set or its key is not present, the structuredTypeName value is used as the structured type. When you use both the structuredTypeKey field and the structuredTypeName field together, the structuredTypeName value provides a fallback index name if the key in the structuredTypeKey field is missing from the JSON log data. Note Although you can set the value of structuredTypeKey to any field shown in the "Log Record Fields" topic, the most useful fields are shown in the preceding list of structure types. A structuredTypeKey: kubernetes.labels.<key> example Suppose the following: Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google". The user labels these application pods with logFormat=apache and logFormat=google . You use the following snippet in your ClusterLogForwarder CR YAML file. apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables parsing JSON logs. In that case, the following structured log record goes to the app-apache-write index: And the following structured log record goes to the app-google-write index: A structuredTypeKey: openshift.labels.<key> example Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2 1 Uses the value of the key-value pair that is formed by the OpenShift myLabel label. 2 The myLabel element gives its string value, myValue , to the structured log record. In that case, the following structured log record goes to the app-myValue-write index: Additional considerations The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write". Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices. If there is no non-empty structured type, forward an unstructured record with no structured field. It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats , not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache . 12.3.3. Forwarding JSON logs to the Elasticsearch log store For an Elasticsearch log store, if your JSON log entries follow different schemas , configure the ClusterLogForwarder custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema. Important Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Procedure Add the following snippet to your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json Use structuredTypeKey field to specify one of the log record fields. Use structuredTypeName field to specify a name. Important To parse JSON logs, you must set both the structuredTypeKey and structuredTypeName fields. For inputRefs , specify which log types to forward by using that pipeline, such as application, infrastructure , or audit . Add the parse: json element to pipelines. Create the CR object: USD oc create -f <filename>.yaml The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy. USD oc delete pod --selector logging-infra=collector 12.3.4. Forwarding JSON logs from containers in the same pod to separate indices You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of app- . It is recommended that Elasticsearch be configured with aliases to accommodate this. Important JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats. Prerequisites Logging for Red Hat OpenShift: 5.5 Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables multi-container outputs. Create or edit a YAML file that defines the Pod CR object: apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage 1 Format: containerType.logging.openshift.io/<container-name>: <index> 2 Annotation names must match container names Warning This configuration might significantly increase the number of shards on the cluster. Additional resources Kubernetes Annotations Additional resources About log forwarding 12.4. Configuring log forwarding In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default. Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output. 12.4.1. About forwarding logs to third-party systems To send logs to specific endpoints inside and outside your OpenShift Container Platform cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object. pipeline Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following: application . Container logs generated by user applications running in the cluster, except infrastructure container applications. infrastructure . Container logs from pods that run in the openshift* , kube* , or default projects and journal logs sourced from node file system. audit . Audit logs generated by the node audit system, auditd , Kubernetes API server, OpenShift API server, and OVN network. You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message. input Forwards the application logs associated with a specific project to a pipeline. In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter. Secret A key:value map that contains confidential data such as user credentials. Note the following: If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped. You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-apps-logs project to the internal Elasticsearch instance. Sample log forwarding outputs and pipelines apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: "elasticsearch" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: "kafka" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: "true" 9 datacenter: "east" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: "west" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: "south" 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Configuration for an secure Elasticsearch output using a secret with a secure URL. A name to describe the output. The type of output: elasticsearch . The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project. 5 Configuration for an insecure Elasticsearch output: A name to describe the output. The type of output: elasticsearch . The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. 6 Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL: A name to describe the output. The type of output: kafka . Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix. 7 Configuration for an input to filter application logs from the my-project namespace. 8 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance: A name to describe the pipeline. The inputRefs is the log type, in this example audit . The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance. Optional: Labels to add to the logs. 9 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 10 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance. 11 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance. A name to describe the pipeline. The inputRefs is a specific input: my-app-logs . The outputRefs is default . Optional: String. One or more labels to add to the logs. 12 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name: The inputRefs is the log type, in this example application . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Fluentd log handling when the external log aggregator is unavailable If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods. Supported Authorization Keys Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. Open Shift Logging will not attempt to verify a mismatch between authorization combinations. Transport Layer Security (TLS) Using a TLS URL ( http://... or ssl://... ) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields: passphrase : (string) Passphrase to decode an encoded TLS private key. Requires tls.key . ca-bundle.crt : (string) File name of a customer CA for server authentication. Username and Password username : (string) Authentication user name. Requires password . password : (string) Authentication password. Requires username . Simple Authentication Security Layer (SASL) sasl.enable (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set. sasl.mechanisms : (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used. sasl.allow-insecure : (boolean) Allow mechanisms that send clear-text passwords. Defaults to false. 12.4.1.1. Creating a Secret You can create a secret in the directory that contains your certificate and key files by using the following command: USD oc create secret generic -n <namespace> <secret_name> \ --from-file=ca-bundle.crt=<your_bundle_file> \ --from-literal=username=<your_username> \ --from-literal=password=<your_password> Note Generic or opaque secrets are recommended for best results. 12.4.2. Creating a log forwarder To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using the multi log forwarder feature, you must also reference the service account in the ClusterLogForwarder CR. If you are using the multi log forwarder feature on your cluster, you can create ClusterLogForwarder custom resources (CRs) in any namespace, using any name. If you are using a legacy implementation, the ClusterLogForwarder CR must be named instance , and must be created in the openshift-logging namespace. Important You need administrator permissions for the namespace where you create the ClusterLogForwarder CR. ClusterLogForwarder resource example apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8 # ... 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 5 7 The type of output that you want to forward logs to. The value of this field can be default , loki , kafka , elasticsearch , fluentdForward , syslog , or cloudwatch . Note The default output type is not supported in mutli log forwarder implementations. 6 A name for the output that you want to forward logs to. 8 The URL of the output that you want to forward logs to. 12.4.3. Tuning log payloads and delivery In logging 5.9 and newer versions, the tuning spec in the ClusterLogForwarder custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs. For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput. Important To use this feature, your logging deployment must be configured to use the Vector collector. The tuning spec in the ClusterLogForwarder CR is not supported when using the Fluentd collector. The following example shows the ClusterLogForwarder CR options that you can modify to tune log forwarder outputs: Example ClusterLogForwarder CR tuning options apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5 # ... 1 Specify the delivery mode for log forwarding. AtLeastOnce delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash. AtMostOnce delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss. 2 Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are none for no compression, gzip , snappy , zlib , or zstd . lz4 compression is also available if you are using a Kafka output. See the table "Supported compression types for tuning outputs" for more information. 3 Specifies a limit for the maximum payload of a single send operation to the output. 4 Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds ( ms ), seconds ( s ), or minutes ( m ). 5 Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds ( ms ), seconds ( s ), or minutes ( m ). Table 12.9. Supported compression types for tuning outputs Compression algorithm Splunk Amazon Cloudwatch Elasticsearch 8 LokiStack Apache Kafka HTTP Syslog Google Cloud Microsoft Azure Monitoring gzip X X X X X snappy X X X X zlib X X X zstd X X X lz4 X 12.4.4. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true . Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true 12.4.4.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. Table 12.10. Supported languages per collector Language Fluentd Vector Java [✓] [✓] JS [✓] [✓] Ruby [✓] [✓] Python [✓] [✓] Golang [✓] [✓] PHP [✓] [✓] Dart [✓] [✓] 12.4.4.2. Troubleshooting When enabled, the collector configuration will include a new section with type: detect_exceptions Example vector configuration section Example fluentd config section 12.4.5. Forwarding logs to Google Cloud Platform (GCP) You can forward logs to Google Cloud Logging in addition to, or instead of, the internal default OpenShift Container Platform log store. Note Using this feature with Fluentd is not supported. Prerequisites Red Hat OpenShift Logging Operator 5.5.1 and later Procedure Create a secret using your Google service account key . USD oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json> Create a ClusterLogForwarder Custom Resource YAML using the template below: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : "openshift-gce-devel" 4 logId : "app-gcp" 5 pipelines: - name: test-app inputRefs: 6 - application outputRefs: - gcp-1 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Set a projectId , folderId , organizationId , or billingAccountId field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy . 5 Set the value to add to the logName field of the Log Entry . 6 Specify which log types to forward by using the pipeline: application , infrastructure , or audit . Additional resources Google Cloud Billing Documentation Google Cloud Logging Query Language Documentation 12.4.6. Forwarding logs to Splunk You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default OpenShift Container Platform log store. Note Using this feature with Fluentd is not supported. Prerequisites Red Hat OpenShift Logging Operator 5.6 or later A ClusterLogging instance with vector specified as the collector Base64 encoded Splunk HEC token Procedure Create a secret using your Base64 encoded Splunk HEC token. USD oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token> Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the name of the secret that contains your HEC token. 6 Specify the output type as splunk . 7 Specify the URL (including port) of your Splunk HEC. 8 Specify which log types to forward by using the pipeline: application , infrastructure , or audit . 9 Optional: Specify a name for the pipeline. 10 Specify the name of the output to use when forwarding logs with this pipeline. 12.4.7. Forwarding logs over HTTP Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify http as the output type in the ClusterLogForwarder custom resource (CR). Procedure Create or edit the ClusterLogForwarder CR using the template below: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Destination address for logs. 5 Additional headers to send with the log record. 6 Secret name for destination credentials. 7 Values are either true or false . 8 This value should be the same as the output name. 12.4.8. Forwarding to Azure Monitor Logs With logging 5.9 and later, you can forward logs to Azure Monitor Logs in addition to, or instead of, the default log store. This functionality is provided by the Vector Azure Monitor Logs sink . Prerequisites You are familiar with how to administer and create a ClusterLogging custom resource (CR) instance. You are familiar with how to administer and create a ClusterLogForwarder CR instance. You understand the ClusterLogForwarder CR specifications. You have basic familiarity with Azure services. You have an Azure account configured for Azure Portal or Azure CLI access. You have obtained your Azure Monitor Logs primary or the secondary security key. You have determined which log types to forward. To enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API: Create a secret with your shared key: apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1 1 Must contain a primary or secondary key for the Log Analytics workspace making the request. To obtain a shared key , you can use this command in Azure CLI: Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>" Create or edit your ClusterLogForwarder CR using the template matching your log selection. Forward all logs apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor 1 Unique identifier for the Log Analytics workspace. Required field. 2 Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters. Forward application and infrastructure logs apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra 1 Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters. Advanced configuration options apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: "/subscriptions/111111111" 1 host: "ods.opinsights.azure.com" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor 1 Resource ID of the Azure resource the data should be associated with. Optional field. 2 Alternative host for dedicated Azure regions. Optional field. Default value is ods.opinsights.azure.com . Default value for Azure Government is ods.opinsights.azure.us . 12.4.9. Forwarding application logs from specific projects You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from OpenShift Container Platform. To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: "my-project" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 The name of the output. 4 The output type: elasticsearch , fluentdForward , syslog , or kafka . 5 The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt , tls.key , and ca-bundle.crt keys that each point to the certificates they represent. 7 The configuration for an input to filter application logs from the specified projects. 8 If no namespace is specified, logs are collected from all namespaces. 9 The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named forward-to-fluentd-insecure forwards logs from an input named my-app-logs to an output named fluentd-server-insecure . 10 A list of inputs. 11 The name of the output to use. 12 Optional: String. One or more labels to add to the logs. 13 Configuration for a pipeline to send logs to other log aggregators. Optional: Specify a name for the pipeline. Specify which log types to forward by using the pipeline: application, infrastructure , or audit . Specify the name of the output to use when forwarding logs with this pipeline. Optional: Specify the default output to forward logs to the default log store. Optional: String. One or more labels to add to the logs. 14 Note that application logs from all namespaces are collected when using this configuration. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 12.4.10. Forwarding application logs from specific pods As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector. Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector. To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels , as shown in the following example. Example ClusterLogForwarder CR YAML file apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name> ... 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 Specify one or more comma-separated values from inputs[].name . 4 Specify one or more comma-separated values from outputs[] . 5 Define a unique inputs[].name for each application that has a unique set of pod labels. 6 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs. 7 Optional: Specify one or more namespaces. 8 Specify one or more outputs to forward your log data to. Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces , as shown in the preceding example. Optional: You can send log data from additional applications that have different pod labels to the same pipeline. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown. Update the selectors to match the pod labels of this application. Add the new inputs[].name value to inputRefs . For example: Create the CR object: USD oc create -f <file-name>.yaml Additional resources For more information on matchLabels in Kubernetes, see Resources that support set-based requirements . 12.4.11. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, checking stops at the first match. How much data is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. Note You can use this feature only if the Vector collector is set up in your logging deployment. In logging 5.8 and later, the ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, a list of HTTP status code for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site. Note The example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. Additional resources Logging network policy events [Logging for egress firewall and network policy rules] 12.4.12. Forwarding logs to an external Loki logging system You can forward logs to an external Loki logging system in addition to, or instead of, the default log store. To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection. Prerequisites You must have a Loki logging system running at the URL you specify with the url field in the CR. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: "loki" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: "loki" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the type as "loki" . 6 Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. Loki's default port for HTTP(S) communication is 3100. 7 For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret . 8 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificates it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password." 9 Optional: Specify a metadata key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. 10 Optional: Specify a list of metadata field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]* . Illegal characters in metadata keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo metadata key becomes Loki label kubernetes_labels_foo . If you do not set labelKeys , the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host] . Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config . You can still query based on any log record field using query filters. 11 Optional: Specify a name for the pipeline. 12 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 13 Specify the name of the output to use when forwarding logs with this pipeline. Note Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts. Apply the ClusterLogForwarder CR object by running the following command: USD oc apply -f <filename>.yaml Additional resources Configuring Loki server 12.4.13. Forwarding logs to an external Elasticsearch instance You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from OpenShift Container Platform. To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection. To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. Note If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a ClusterLogForwarder CR. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: "myValue" 13 # ... 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the elasticsearch type. 6 Specify the Elasticsearch version. This can be 6 , 7 , or 8 . 7 Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. 8 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password." 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 12 Optional: Specify the default output to send the logs to the internal Elasticsearch instance. 13 Optional: String. One or more labels to add to the logs. Apply the ClusterLogForwarder CR: USD oc apply -f <filename>.yaml Example: Setting a secret that contains a username and password You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default. apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password> # ... Create the secret: USD oc create secret -n openshift-logging openshift-test-secret.yaml Specify the name of the secret in the ClusterLogForwarder CR: kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret # ... Note In the value of the url field, the prefix can be http or https . Apply the CR object: USD oc apply -f <filename>.yaml 12.4.14. Forwarding logs using the Fluentd forward protocol You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform. To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: "C1234" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the fluentdForward type. 5 Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents. 7 Optional: Specify a name for the pipeline. 8 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 9 Specify the name of the output to use when forwarding logs with this pipeline. 10 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 11 Optional: String. One or more labels to add to the logs. 12 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml 12.4.14.1. Enabling nanosecond precision for Logstash to ingest data from fluentd For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file. Procedure In the Logstash configuration file, set nanosecond_precision to true . Example Logstash configuration file input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } } 12.4.15. Forwarding logs using the syslog protocol You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from OpenShift Container Platform. To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: "true" 13 syslog: "east" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: "west" 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the syslog type. 6 Optional: Specify the syslog parameters, listed below. 7 Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 8 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 12 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 13 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 14 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <filename>.yaml 12.4.15.1. Adding log source information to message output You can add namespace_name , pod_name , and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR). spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout Note This configuration is compatible with both RFC3164 and RFC5424. Example syslog message output without AddLogSource <15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56} Example syslog message output with AddLogSource <15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76} 12.4.15.2. Syslog parameters You can configure the following for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 RFC. facility: The syslog facility . The value can be a decimal integer or a case-insensitive keyword: 0 or kern for kernel messages 1 or user for user-level messages, the default. 2 or mail for the mail system 3 or daemon for system daemons 4 or auth for security/authentication messages 5 or syslog for messages generated internally by syslogd 6 or lpr for the line printer subsystem 7 or news for the network news subsystem 8 or uucp for the UUCP subsystem 9 or cron for the clock daemon 10 or authpriv for security authentication messages 11 or ftp for the FTP daemon 12 or ntp for the NTP subsystem 13 or security for the syslog audit log 14 or console for the syslog alert log 15 or solaris-cron for the scheduling daemon 16 - 23 or local0 - local7 for locally used facilities Optional: payloadKey : The record field to use as payload for the syslog message. Note Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog. rfc: The RFC to be used for sending logs using syslog. The default is RFC5424. severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword: 0 or Emergency for messages indicating the system is unusable 1 or Alert for messages indicating action must be taken immediately 2 or Critical for messages indicating critical conditions 3 or Error for messages indicating error conditions 4 or Warning for messages indicating warning conditions 5 or Notice for messages indicating normal but significant conditions 6 or Informational for messages indicating informational messages 7 or Debug for messages indicating debug-level messages, the default tag: Tag specifies a record field to use as a tag on the syslog message. trimPrefix: Remove the specified prefix from the tag. 12.4.15.3. Additional RFC5424 syslog parameters The following parameters apply to RFC5424: appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424 . msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424 . procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424 . 12.4.16. Forwarding logs to a Kafka broker You can forward logs to an external Kafka broker in addition to, or instead of, the default log store. To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: "application" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: "infra" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: "audit" 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the kafka type. 6 Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 7 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project. 8 Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 12 Optional: String. One or more labels to add to the logs. 13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example: # ... spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3 # ... 1 Specify a kafka key that has a brokers and topic key. 2 Use the brokers key to specify an array of one or more brokers. 3 Use the topic key to specify the target topic that receives the logs. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 12.4.17. Forwarding logs to Amazon CloudWatch You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store. To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output. Procedure Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example: apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= Create the secret. For example: USD oc apply -f cw-secret.yaml Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the cloudwatch type. 6 Optional: Specify how to group the logs: logType creates log groups for each log type. namespaceName creates a log group for each application name space. It also creates separate log groups for infrastructure and audit logs. namespaceUUID creates a new log groups for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 7 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 8 Specify the AWS region. 9 Specify the name of the secret that contains your AWS credentials. 10 Optional: Specify a name for the pipeline. 11 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 12 Specify the name of the output to use when forwarding logs with this pipeline. Create the CR object: USD oc create -f <file-name>.yaml Example: Using ClusterLogForwarder with Amazon CloudWatch Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch. Suppose that you are running an OpenShift Container Platform cluster named mycluster . The following command returns the cluster's infrastructureName , which you will use to compose aws commands later on: USD oc get Infrastructure/cluster -ojson | jq .status.infrastructureName "mycluster-7977k" To generate log data for this example, you run a busybox pod in a namespace called app . The busybox pod writes a message to stdout every three seconds: USD oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done' USD oc logs -f busybox My life is my message My life is my message My life is my message ... You can look up the UUID of the app namespace where the busybox pod runs: USD oc get ns/app -ojson | jq .metadata.uid "794e1e1a-b9f5-4958-a190-e76a9b53d7bf" In your ClusterLogForwarder custom resource (CR), you configure the infrastructure , audit , and application log types as inputs to the all-logs pipeline. You also connect this pipeline to cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw Each region in CloudWatch contains three levels of objects: log group log stream log event With groupBy: logType in the ClusterLogForwarding CR, the three log types in the inputRefs produce three log groups in Amazon Cloudwatch: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.application" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" Each of the log groups contains log streams: USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName "kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log" USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log" ... USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log" ... Each log stream contains log events. To see a log event from the busybox Pod, you specify its log stream from the application log group: USD aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { "events": [ { "timestamp": 1629422704178, "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}", "ingestionTime": 1629422744016 }, ... Example: Customizing the prefix in log group names In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k , with an arbitrary string like demo-group-prefix . To make this change, you update the groupPrefix field in the ClusterLogForwarding CR: cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2 The value of groupPrefix replaces the default infrastructureName prefix: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "demo-group-prefix.application" "demo-group-prefix.audit" "demo-group-prefix.infrastructure" Example: Naming log groups after application namespace names For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace. If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before. If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead. To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceName region: us-east-2 Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups. In Amazon Cloudwatch, the namespace name appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.app" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace. The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. Example: Naming log groups after application namespace UUIDs For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace. If you delete an application namespace object and create a new one, CloudWatch creates a new log group. If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead. To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceUUID region: us-east-2 In Amazon Cloudwatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace "mycluster-7977k.audit" "mycluster-7977k.infrastructure" The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. 12.4.18. Creating a secret for AWS CloudWatch with an existing AWS role If you have an existing role for AWS, you can create a secret for AWS with STS using the oc create secret --from-literal command. Procedure In the CLI, enter the following to generate a secret for AWS: USD oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions Example Secret apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions 12.4.19. Forwarding logs to Amazon CloudWatch from STS enabled clusters For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the Cloud Credential Operator (CCO) utility ccoctl . Prerequisites Logging for Red Hat OpenShift: 5.5 and later Procedure Create a CredentialsRequest custom resource YAML by using the template below: CloudWatch credentials request template apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector Use the ccoctl command to create a role for AWS using your CredentialsRequest CR. With the CredentialsRequest object, this ccoctl command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml . This secret file contains the role_arn key/value used during authentication with the AWS IAM identity provider. USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1 1 <name> is the name used to tag your cloud resources and should match the name used during your STS cluster install Apply the secret created: USD oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml Create or edit a ClusterLogForwarder custom resource: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: clf-collector 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 Specify the clf-collector service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the cloudwatch type. 6 Optional: Specify how to group the logs: logType creates log groups for each log type. namespaceName creates a log group for each application name space. Infrastructure and audit logs are unaffected, remaining grouped by logType . namespaceUUID creates a new log groups for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 7 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 8 Specify the AWS region. 9 Specify the name of the secret that contains your AWS credentials. 10 Optional: Specify a name for the pipeline. 11 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 12 Specify the name of the output to use when forwarding logs with this pipeline. Additional resources AWS STS API Reference Cloud Credential Operator (CCO) 12.5. Configuring the logging collector Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed though the spec.collection stanza in the ClusterLogging custom resource (CR). 12.5.1. Configuring the log collector You can configure which log collector type your logging uses by modifying the ClusterLogging custom resource (CR). Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogging CR. Procedure Modify the ClusterLogging CR collection spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... collection: type: <log_collector_type> 1 resources: {} tolerations: {} # ... 1 The log collector type you want to use for the logging. This can be vector or fluentd . Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 12.5.2. Creating a LogFileMetricExporter resource In logging version 5.8 and newer versions, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs . Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator. You have installed the OpenShift CLI ( oc ). Procedure Create a LogFileMetricExporter CR as a YAML file: Example LogFileMetricExporter CR apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3 # ... 1 Optional: The nodeSelector stanza defines which nodes the pods are scheduled on. 2 The resources stanza defines resource requirements for the LogFileMetricExporter CR. 3 Optional: The tolerations stanza defines the tolerations that the pods accept. Apply the LogFileMetricExporter CR by running the following command: USD oc apply -f <filename>.yaml Verification A logfilesmetricexporter pod runs concurrently with a collector pod on each node. Verify that the logfilesmetricexporter pods are running in the namespace where you have created the LogFileMetricExporter CR, by running the following command and observing the output: USD oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging Example output NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s 12.5.3. Configure log collector CPU and memory limits The log collector allows for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi # ... 1 Specify the CPU and memory limits and requests as needed. The values shown are the default values. 12.5.4. Configuring input receivers The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. The service name is generated based on the following: For multi log forwarder ClusterLogForwarder CR deployments, the service name is in the format <ClusterLogForwarder_CR_name>-<input_name> . For example, example-http-receiver . For legacy ClusterLogForwarder CR deployments, meaning those named instance and located in the openshift-logging namespace, the service name is in the format collector-<input_name> . For example, collector-http-receiver . 12.5.4.1. Configuring the collector to receive audit logs as an HTTP server You can configure your log collector to listen for HTTP connections and receive audit logs as an HTTP server by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR). This enables you to use a common log store for audit logs that are collected from both inside and outside of your OpenShift Container Platform cluster. Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogForwarder CR. Procedure Modify the ClusterLogForwarder CR to add configuration for the http receiver input: Example ClusterLogForwarder CR if you are using a multi log forwarder deployment apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver # ... 1 Specify a name for your input receiver. 2 Specify the input receiver type as http . 3 Currently, only the kube-apiserver webhook format is supported for http input receivers. 4 Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535 . The default value is 8443 if this is not specified. 5 Configure a pipeline for your input receiver. Example ClusterLogForwarder CR if you are using a legacy deployment apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline # ... 1 Specify a name for your input receiver. 2 Specify the input receiver type as http . 3 Currently, only the kube-apiserver webhook format is supported for http input receivers. 4 Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535 . The default value is 8443 if this is not specified. 5 Configure a pipeline for your input receiver. Apply the changes to the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional resources Overview of API audit filter 12.5.5. Advanced configuration for the Fluentd log forwarder Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors: Chunk and chunk buffer sizes Chunk flushing behavior Chunk forwarding retry behavior Fluentd collects log data in a single blob called a chunk . When Fluentd creates a chunk, the chunk is considered to be in the stage , where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue , where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured. By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. These parameters can help you determine the trade-offs between latency and throughput. To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries. You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd. Note These parameters are: Not relevant to most users. The default settings should give good general performance. Only for advanced users with detailed knowledge of Fluentd configuration and performance. Only for performance tuning. They have no effect on functional aspects of logging. Table 12.11. Advanced Fluentd Configuration Parameters Parameter Description Default chunkLimitSize The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. 8m totalLimitSize The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. Approximately 15% of the node disk distributed across all outputs. flushInterval The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days). 1s flushMode The method to perform flushes: lazy : Flush chunks based on the timekey parameter. You cannot modify the timekey parameter. interval : Flush chunks based on the flushInterval parameter. immediate : Flush chunks immediately after data is added to a chunk. interval flushThreadCount The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. 2 overflowAction The chunking behavior when the queue is full: throw_exception : Raise an exception to show in the log. block : Stop data chunking until the full buffer issue is resolved. drop_oldest_chunk : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. block retryMaxInterval The maximum time in seconds for the exponential_backoff retry method. 300s retryType The retry method when flushing fails: exponential_backoff : Increase the time between flush retries. Fluentd doubles the time it waits until the retry until the retry_max_interval parameter is reached. periodic : Retries flushes periodically, based on the retryWait parameter. exponential_backoff retryTimeOut The maximum time interval to attempt retries before the record is discarded. 60m retryWait The time in seconds before the chunk flush. 1s For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Add or modify any of the following parameters: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: "300s" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9 # ... 1 Specify the maximum size of each chunk before it is queued for flushing. 2 Specify the interval between chunk flushes. 3 Specify the method to perform chunk flushes: lazy , interval , or immediate . 4 Specify the number of threads to use for chunk flushes. 5 Specify the chunking behavior when the queue is full: throw_exception , block , or drop_oldest_chunk . 6 Specify the maximum interval in seconds for the exponential_backoff chunk flushing method. 7 Specify the retry type when chunk flushing fails: exponential_backoff or periodic . 8 Specify the time in seconds before the chunk flush. 9 Specify the maximum size of the chunk buffer. Verify that the Fluentd pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Check that the new values are in the fluentd config map: USD oc extract configmap/collector-config --confirm Example fluentd.conf <buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}" total_limit_size "#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer> 12.6. Collecting and storing Kubernetes events The OpenShift Container Platform Event Router is a pod that watches Kubernetes events and logs them for collection by the logging. You must manually deploy the Event Router. The Event Router collects events from all projects and writes them to STDOUT . The collector then forwards those events to the store defined in the ClusterLogForwarder custom resource (CR). Important The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed. 12.6.1. Deploying and configuring the Event Router Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the openshift-logging project to ensure it collects events from across the cluster. Note The Event Router image is not a part of the Red Hat OpenShift Logging Operator and must be downloaded separately. The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can either use this template without making changes or edit the template to change the deployment object CPU and memory requests. Prerequisites You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role. The Red Hat OpenShift Logging Operator must be installed. Procedure Create a template for the Event Router: apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: "A pod forwarding kubernetes events to OpenShift Logging stack." tags: "events,EFK,logging,cluster-logging" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [""] resources: ["events"] verbs: ["get", "watch", "list"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { "sink": "stdout" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" spec: selector: matchLabels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" replicas: 1 template: metadata: labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: "registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4" - name: CPU 7 displayName: CPU value: "100m" - name: MEMORY 8 displayName: Memory value: "128Mi" - name: NAMESPACE displayName: Namespace value: "openshift-logging" 9 1 Creates a Service Account in the openshift-logging project for the Event Router. 2 Creates a ClusterRole to monitor for events in the cluster. 3 Creates a ClusterRoleBinding to bind the ClusterRole to the service account. 4 Creates a config map in the openshift-logging project to generate the required config.json file. 5 Creates a deployment in the openshift-logging project to generate and configure the Event Router pod. 6 Specifies the image, identified by a tag such as v0.4 . 7 Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m . 8 Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi . 9 Specifies the openshift-logging project to install objects in. Use the following command to process and apply the template: USD oc process -f <templatefile> | oc apply -n openshift-logging -f - For example: USD oc process -f eventrouter.yaml | oc apply -n openshift-logging -f - Example output serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created Validate that the Event Router installed in the openshift-logging project: View the new Event Router pod: USD oc get pods --selector component=eventrouter -o name -n openshift-logging Example output pod/cluster-logging-eventrouter-d649f97c8-qvv8r View the events collected by the Event Router: USD oc logs <cluster_logging_eventrouter_pod> -n openshift-logging For example: USD oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging Example output {"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}} You can also use Kibana to view events by creating an index pattern using the Elasticsearch infra index.
[ "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}", "oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>", "{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}", "pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json", "{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }", "{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json", "oc create -f <filename>.yaml", "oc delete pod --selector logging-infra=collector", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json", "apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"", "oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5", "java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true", "[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000", "<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>", "oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json= <your_service_account_key_file.json>", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: gcp-1 type: googleCloudLogging secret: name: gcp-secret googleCloudLogging: projectId : \"openshift-gce-devel\" 4 logId : \"app-gcp\" 5 pipelines: - name: test-app inputRefs: 6 - application outputRefs: - gcp-1", "oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1", "Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName \"<resource_name>\" -Name \"<workspace_name>\"", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: \"/subscriptions/111111111\" 1 host: \"ods.opinsights.azure.com\" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name>", "- inputRefs: [ myAppLogData, myOtherAppLogData ]", "oc create -f <file-name>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: \"loki\" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: \"myValue\" 13", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>", "oc create secret -n openshift-logging openshift-test-secret.yaml", "kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"", "oc create -f <file-name>.yaml", "input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"", "oc create -f <filename>.yaml", "spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout", "<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}", "<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: \"audit\"", "spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=", "oc apply -f cw-secret.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12", "oc create -f <file-name>.yaml", "oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"", "oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message", "oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"", "aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },", "cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"", "cloudwatch: groupBy: namespaceName region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "cloudwatch: groupBy: namespaceUUID region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions", "apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <your_role_name>-credrequest namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - logs:PutLogEvents - logs:CreateLogGroup - logs:PutRetentionPolicy - logs:CreateLogStream - logs:DescribeLogGroups - logs:DescribeLogStreams effect: Allow resource: arn:aws:logs:*:*:* secretRef: name: <your_role_name> namespace: openshift-logging serviceAccountNames: - logcollector", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com 1", "oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: clf-collector 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3", "oc apply -f <filename>.yaml", "oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging", "NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline", "oc apply -f <filename>.yaml", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9", "oc get pods -l component=collector -n openshift-logging", "oc extract configmap/collector-config --confirm", "<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>", "apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9", "oc process -f <templatefile> | oc apply -n openshift-logging -f -", "oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -", "serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created", "oc get pods --selector component=eventrouter -o name -n openshift-logging", "pod/cluster-logging-eventrouter-d649f97c8-qvv8r", "oc logs <cluster_logging_eventrouter_pod> -n openshift-logging", "oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging", "{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/log-collection-and-forwarding-2
Chapter 10. Managing alerts
Chapter 10. Managing alerts In OpenShift Container Platform 4.12, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an OpenShift Container Platform cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. 10.1. Accessing the Alerting UI in the Administrator and Developer perspectives The Alerting UI is accessible through the Administrator perspective and the Developer perspective of the OpenShift Container Platform web console. In the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. In the Developer perspective, go to Observe <project_name> Alerts . In this perspective, alerts, silences, and alerting rules are all managed from the Alerts page. The results shown in the Alerts page are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you do not have cluster-admin privileges. 10.2. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OpenShift Container Platform and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. Understanding alerting rule filters In the Administrator perspective, the Alerting rules page in the Alerting UI provides details about alerting rules relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert state filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert continues to fire while the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications are not sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . Platform-level alerting rules relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled postinstallation to provide observability into your own workloads. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 10.3. Getting information about alerts, silences, and alerting rules The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts in the Administrator perspective : Open the OpenShift Container Platform web console and go to the Observe Alerting Alerts page. Optional: Search for alerts by name by using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert: A description of the alert Messages associated with the alert Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences in the Administrator perspective : Go to the Observe Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing alerts , State , and Creator column headers. Select the name of a silence to view its Silence details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules in the Administrator perspective : Go to the Observe Alerting Alerting rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert state , and Source column headers. Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule: Alerting rule name, severity, and description. The expression that defines the condition for firing the alert. The time for which the condition should be true for an alert to fire. A graph for each alert governed by the alerting rule, showing the value with which the alert is firing. A table of all alerts governed by the alerting rule. To obtain information about alerts, silences, and alerting rules in the Developer perspective : Go to the Observe <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert details can be viewed by clicking a greater than symbol ( > ) to an alert name and then selecting the alert from the list. Silence details can be viewed by clicking a silence in the Silenced by section of the Alert details page. The Silence details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting rule details can be viewed by clicking the menu to an alert in the Alerts page and then clicking View Alerting Rule . Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. Additional resources See the Cluster Monitoring Operator runbooks to help diagnose and resolve issues that trigger specific OpenShift Container Platform monitoring alerts. 10.4. Managing silences You can create a silence to stop receiving notifications about an alert when it is firing. It might be useful to silence an alert after being first notified, while you resolve the underlying issue. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. You can view, edit, and expire existing silences. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Configuring persistent storage 10.4.1. Silencing alerts You can either silence a specific alert or silence alerts that match a specification that you define. Prerequisites You are a cluster administrator and have access to the cluster as a user with the cluster-admin cluster role. You are a non-administator user and have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure To silence a specific alert: In the Administrator perspective: Navigate to the Observe Alerting Alerts page of the OpenShift Container Platform web console. For the alert that you want to silence, select the in the right-hand column and select Silence Alert . The Silence Alert form will appear with a pre-populated specification for the chosen alert. Optional: Modify the silence. You must add a comment before creating the silence. To create the silence, select Silence . In the Developer perspective: Navigate to the Observe <project_name> Alerts page in the OpenShift Container Platform web console. Expand the details for an alert by selecting a greater than symbol ( > ) to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select Silence Alert . The Silence Alert form will appear with a prepopulated specification for the chosen alert. Optional: Modify the silence. You must add a comment before creating the silence. To create the silence, select Silence . To silence a set of alerts by creating an alert specification in the Administrator perspective: Navigate to the Observe Alerting Silences page in the OpenShift Container Platform web console. Select Create Silence . Set the schedule, duration, and label details for an alert in the Create Silence form. You must also add a comment for the silence. To create silences for alerts that match the label sectors that you entered in the step, select Silence . 10.4.2. Editing silences You can edit a silence, which will expire the existing silence and create a new one with the changed configuration. Procedure To edit a silence in the Administrator perspective: Navigate to the Observe Alerting Silences page. For the silence you want to modify, select the in the last column and choose Edit silence . Alternatively, you can select Actions Edit Silence in the Silence Details page for a silence. In the Edit Silence page, enter your changes and select Silence . This will expire the existing silence and create one with the chosen configuration. To edit a silence in the Developer perspective: Navigate to the Observe <project_name> Alerts page. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select the name of a silence in the Silenced By section in that page to navigate to the Silence Details page for the silence. Select the name of a silence to navigate to its Silence Details page. Select Actions Edit Silence in the Silence Details page for a silence. In the Edit Silence page, enter your changes and select Silence . This will expire the existing silence and create one with the chosen configuration. 10.4.3. Expiring silences You can expire a silence. Expiring a silence deactivates it forever. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Procedure To expire a silence in the Administrator perspective: Navigate to the Observe Alerting Silences page. For the silence you want to modify, select the in the last column and choose Expire silence . Alternatively, you can select Actions Expire Silence in the Silence Details page for a silence. To expire a silence in the Developer perspective: Navigate to the Observe <project_name> Alerts page. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select the name of a silence in the Silenced By section in that page to navigate to the Silence Details page for the silence. Select the name of a silence to navigate to its Silence Details page. Select Actions Expire Silence in the Silence Details page for a silence. 10.5. Managing alerting rules for user-defined projects OpenShift Container Platform monitoring ships with a set of default alerting rules. As a cluster administrator, you can view the default alerting rules. In OpenShift Container Platform 4.12, you can create, view, edit, and remove alerting rules in user-defined projects. Alerting rule considerations The default alerting rules are used specifically for the OpenShift Container Platform cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 10.5.1. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. Additional resources See the Prometheus alerting documentation for further guidelines on optimizing alerts See Monitoring overview for details about OpenShift Container Platform 4.12 monitoring architecture 10.5.2. About creating alerting rules for user-defined projects If you create alerting rules for a user-defined project, consider the following key behaviors and important limitations when you define the new rules: A user-defined alerting rule can include metrics exposed by its own project in addition to the default metrics from core platform monitoring. You cannot include metrics from another user-defined project. For example, an alerting rule for the ns1 user-defined project can use metrics exposed by the ns1 project in addition to core platform metrics, such as CPU and memory metrics. However, the rule cannot include metrics from a different ns2 user-defined project. To reduce latency and to minimize the load on core platform monitoring components, you can add the openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus label to a rule. This label forces only the Prometheus instance deployed in the openshift-user-workload-monitoring project to evaluate the alerting rule and prevents the Thanos Ruler instance from doing so. Important If an alerting rule has this label, your alerting rule can use only those metrics exposed by your user-defined project. Alerting rules you create based on default platform metrics might not trigger alerts. 10.5.3. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Note When you create an alerting rule, a project label is enforced on it even if a rule with the same name exists in another project. To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml See Monitoring overview for details about OpenShift Container Platform 4.12 monitoring architecture. 10.5.4. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view cluster role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view cluster role for your project. You have installed the OpenShift CLI ( oc ). Procedure You can list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 10.5.5. Listing alerting rules for all projects in a single view As a cluster administrator, you can list alerting rules for core OpenShift Container Platform and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Alerting Alerting Rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 10.5.6. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> Additional resources See the Alertmanager documentation 10.6. Managing alerting rules for core platform monitoring Important Creating and modifying alerting rules for core platform monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.12 monitoring ships with a large set of default alerting rules for platform metrics. As a cluster administrator, you can customize this set of rules in two ways: Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. For example, you can change the severity label for an alert from warning to critical to help you route and triage issues flagged by an alert. Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the openshift-monitoring namespace. Core platform alerting rule considerations New alerting rules must be based on the default OpenShift Container Platform monitoring metrics. You must create the AlertingRule and AlertRelabelConfig objects in the openshift-monitoring namespace. You can only add and modify alerting rules. You cannot create new recording rules or modify existing recording rules. If you modify existing platform alerting rules by using an AlertRelabelConfig object, your modifications are not reflected in the Prometheus alerts API. Therefore, any dropped alerts still appear in the OpenShift Container Platform web console even though they are no longer forwarded to Alertmanager. Additionally, any modifications to alerts, such as a changed severity label, do not appear in the web console. 10.6.1. Modifying core platform alerting rules As a cluster administrator, you can modify core platform alerts before Alertmanager routes them to a receiver. For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have enabled Technology Preview features, and all nodes in the cluster are ready. Procedure Create a new YAML configuration file named example-modified-alerting-rule.yaml . Add an AlertRelabelConfig resource to the YAML file. The following example modifies the severity setting to critical for the default platform watchdog alerting rule: apiVersion: monitoring.openshift.io/v1alpha1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: "Watchdog;none" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6 1 Ensure that the namespace is openshift-monitoring . 2 The source labels for the values you want to modify. 3 The regular expression against which the value of sourceLabels is matched. 4 The target label of the value you want to modify. 5 The new value to replace the target label. 6 The relabel action that replaces the old value based on regex matching. The default action is Replace . Other possible values are Keep , Drop , HashMod , LabelMap , LabelDrop , and LabelKeep . Important You must create the AlertRelabelConfig object in the openshift-monitoring namespace. Otherwise, the alert label will not change. Apply the configuration file to the cluster: USD oc apply -f example-modified-alerting-rule.yaml 10.6.2. Creating new alerting rules As a cluster administrator, you can create new alerting rules based on platform metrics. These alerting rules trigger alerts based on the values of chosen metrics. Note If you create a customized AlertingRule resource based on an existing platform alerting rule, silence the original alert to avoid receiving conflicting alerts. To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have access to the cluster as a user that has the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have enabled Technology Preview features, and all nodes in the cluster are ready. Procedure Create a new YAML configuration file named example-alerting-rule.yaml . Add an AlertingRule resource to the YAML file. The following example creates a new alerting rule named example , similar to the default Watchdog alert: apiVersion: monitoring.openshift.io/v1alpha1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6 1 Ensure that the namespace is openshift-monitoring . 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The severity that alerting rule assigns to the alert. 6 The message associated with the alert. Important You must create the AlertingRule object in the openshift-monitoring namespace. Otherwise, the alerting rule is not accepted. Apply the configuration file to the cluster: USD oc apply -f example-alerting-rule.yaml Additional resources See Monitoring overview for details about OpenShift Container Platform 4.12 monitoring architecture. See the Alertmanager documentation for information about alerting rules. See the Prometheus relabeling documentation for information about how relabeling works. See the Prometheus alerting documentation for further guidelines on optimizing alerts. 10.7. Sending notifications to external systems In OpenShift Container Platform 4.12, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 10.7.1. Configuring alert receivers You can configure alert receivers to ensure that you learn about important issues with your cluster. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure In the Administrator perspective, go to Administration Cluster Settings Configuration Alertmanager . Note Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Click Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver name and choose a Receiver type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Click Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Select whether TLS is required. Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps: Add routing label names and values in the Routing labels section of the form. Select Regular expression if want to use a regular expression. Click Add label to add further routing labels. Click Create to create the receiver. 10.7.2. Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts. 10.7.3. Creating alert routing for user-defined projects If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects. Prerequisites A cluster administrator has enabled monitoring for user-defined projects. A cluster administrator has enabled alert routing for user-defined projects. You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml . Add an AlertmanagerConfig YAML definition to the file. For example: apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post Note For user-defined alerting rules, user-defined routing is scoped to the namespace in which the resource is defined. For example, a routing configuration defined in the AlertmanagerConfig object for namespace ns1 only applies to PrometheusRules resources in the same namespace. Save the file. Apply the resource to the cluster: USD oc apply -f example-app-alert-routing.yaml The configuration is automatically applied to the Alertmanager pods. 10.8. Configuring Alertmanager to send notifications You can configure Alertmanager to send notifications by editing the alertmanager-main secret for default platform alerts or alertmanager-user-workload secret for user-defined alerts. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration . 10.8.1. Configuring notifications for default platform alerts You can configure Alertmanager to send notifications. Customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the alertmanager-main secret in the openshift-monitoring namespace. Important Alertmanager does not send notifications by default. It is recommended to configure Alertmanager to receive notifications by setting up notifications details in the alertmanager-main secret configuration file. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure Open the Alertmanager YAML configuration file: To open the Alertmanager configuration from the CLI: Print the currently active Alertmanager configuration from the alertmanager-main secret into alertmanager.yaml file: USD oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Open the alertmanager.yaml file. To open the Alertmanager configuration from the OpenShift Container Platform web console: Go to the Administration Cluster Settings Configuration Alertmanager YAML page of the web console. Edit the Alertmanager configuration by updating parameters in the YAML: global: resolve_timeout: 5m route: group_wait: 30s 1 group_interval: 5m 2 repeat_interval: 12h 3 receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 2m receiver: watchdog - matchers: - "service=<your_service>" 4 routes: - matchers: - <your_matching_rules> 5 receiver: <receiver> 6 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 7 1 Specify how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification. 2 Specify how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent. 3 Specify the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the repeat_interval value to less than the group_interval value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled. 4 Specify the name of the service that fires the alerts. 5 Specify labels to match your alerts. 6 Specify the name of the receiver to use for the alerts. 7 Specify the receiver configuration. Important Use the matchers key name to indicate the matchers that an alert has to fulfill to match the node. Do not use the match or match_re key names, which are both deprecated and planned for removal in a future release. If you define inhibition rules, use the following key names: target_matchers : to indicate the target matchers source_matchers : to indicate the source matchers Do not use the target_match , target_match_re , source_match , or source_match_re key names, which are deprecated and planned for removal in a future release. The following Alertmanager configuration example configures PagerDuty as an alert receiver: global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 2m receiver: watchdog - matchers: - "service=example-app" routes: - matchers: - "severity=critical" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: "<your_key>" With this configuration, alerts of critical severity that are fired by the example-app service are sent through the team-frontend-page receiver. Typically, these types of alerts would be paged to an individual or a critical response team. Apply the new configuration in the file: To apply the changes from the CLI, run the following command: USD oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=- To apply the changes from the OpenShift Container Platform web console, click Save . 10.8.2. Configuring notifications for user-defined alerts If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. Procedure Print the currently active Alertmanager configuration into the file alertmanager.yaml : USD oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : route: receiver: Default group_by: - name: Default routes: - matchers: - "service = prometheus-example-monitor" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3 1 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label. 2 Specify the name of the receiver to use for the alerts group. 3 Specify the receiver configuration. Apply the new configuration in the file: USD oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=- 10.9. Additional resources PagerDuty official site PagerDuty Prometheus Integration Guide Support version matrix for monitoring components Enabling alert routing for user-defined projects
[ "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n <project> get prometheusrule", "oc -n <project> get prometheusrule <rule> -o yaml", "oc -n <namespace> delete prometheusrule <foo>", "apiVersion: monitoring.openshift.io/v1alpha1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: \"Watchdog;none\" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6", "oc apply -f example-modified-alerting-rule.yaml", "apiVersion: monitoring.openshift.io/v1alpha1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6", "oc apply -f example-alerting-rule.yaml", "apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post", "oc apply -f example-app-alert-routing.yaml", "oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: resolve_timeout: 5m route: group_wait: 30s 1 group_interval: 5m 2 repeat_interval: 12h 3 receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=<your_service>\" 4 routes: - matchers: - <your_matching_rules> 5 receiver: <receiver> 6 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration> 7", "global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \"<your_key>\"", "oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-", "oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3", "oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/managing-alerts
Chapter 5. Managing Storage
Chapter 5. Managing Storage If there is one thing that takes up the majority of a system administrator's day, it would have to be storage management. It seems that disks are always running out of free space, becoming overloaded with too much I/O activity, or failing unexpectedly. Therefore, it is vital to have a solid working knowledge of disk storage in order to be a successful system administrator. 5.1. An Overview of Storage Hardware Before managing storage, it is first necessary to understand the hardware on which data is stored. Unless you have at least some knowledge about mass storage device operation, you may find yourself in a situation where you have a storage-related problem, but you lack the background knowledge necessary to interpret what you are seeing. By gaining some insight into how the underlying hardware operates, you should be able to more easily determine whether your computer's storage subsystem is operating properly. The vast majority of all mass-storage devices use some sort of rotating media and supports the random access of data on that media. This means that the following components are present in some form within nearly every mass storage device: Disk platters Data reading/writing device Access arms The following sections explore each of these components in more detail. 5.1.1. Disk Platters The rotating media used by nearly all mass storage devices are in the form of one or more flat, circularly-shaped platters. The platter may be composed of any number of different materials, such aluminum, glass, and polycarbonate. The surface of each platter is treated in such a way as to enable data storage. The exact nature of the treatment depends on the data storage technology to be used. The most common data storage technology is based on the property of magnetism; in these cases the platters are covered with a compound that exhibits good magnetic characteristics. Another common data storage technology is based on optical principles; in these cases, the platters are covered with materials whose optical properties can be modified, thereby allowing data to be stored optically [14] . No matter what data storage technology is in use, the disk platters are spun, causing their entire surface to sweep past another component -- the data reading/writing device. 5.1.2. Data reading/writing device The data reading/writing device is the component that takes the bits and bytes on which a computer system operates and turns them into the magnetic or optical variations necessary to interact with the materials coating the surface of the disk platters. Sometimes the conditions under which these devices must operate are challenging. For instance, in magnetically-based mass storage the read/write devices (known as heads ) must be very close to the surface of the platter. However, if the head and the surface of the disk platter were to touch, the resulting friction would do severe damage to both the head and the platter. Therefore, the surfaces of both the head and the platter are carefully polished, and the head uses air pressure developed by the spinning platters to float over the platter's surface, "flying" at an altitude less than the thickness of a human hair. This is why magnetic disk drives are sensitive to shock, sudden temperature changes, and any airborne contamination. The challenges faced by optical heads are somewhat different than for magnetic heads -- here, the head assembly must remain at a relatively constant distance from the surface of the platter. Otherwise, the lenses used to focus on the platter does not produce a sufficiently sharp image. In either case, the heads use a very small amount of the platter's surface area for data storage. As the platter spins below the heads, this surface area takes the form of a very thin circular line. If this was how mass storage devices worked, it would mean that over 99% of the platter's surface area would be wasted. Additional heads could be mounted over the platter, but to fully utilize the platter's surface area more than a thousand heads would be necessary. What is required is some method of moving the head over the surface of the platter. 5.1.3. Access Arms By using a head attached to an arm that is capable of sweeping over the platter's entire surface, it is possible to fully utilize the platter for data storage. However, the access arm must be capable of two things: Moving very quickly Moving very precisely The access arm must move as quickly as possible, because the time spent moving the head from one position to another is wasted time. That is because no data can be read or written until the access arm stops moving [15] . The access arm must be able to move with great precision because, as stated earlier, the surface area used by the heads is very small. Therefore, to efficiently use the platter's storage capacity, it is necessary to move the heads only enough to ensure that any data written in the new position does not overwrite data written at a position. This has the affect of conceptually dividing the platter's surface into a thousand or more concentric "rings" or tracks . Movement of the access arm from one track to another is often referred to as seeking , and the time it takes the access arms to move from one track to another is known as the seek time . Where there are multiple platters (or one platter with both surfaces used for data storage), the arms for each surface are stacked, allowing the same track on each surface to be accessed simultaneously. If the tracks for each surface could be visualized with the access stationary over a given track, they would appear to be stacked one on top of another, making up a cylindrical shape; therefore, the set of tracks accessible at a certain position of the access arms are known as a cylinder . [14] Some optical devices -- notably CD-ROM drives -- use somewhat different approaches to data storage; these differences are pointed out at the appropriate points within the chapter. [15] In some optical devices (such as CD-ROM drives), the access arm is continually moving, causing the head assembly to describe a spiral path over the surface of the platter. This is a fundamental difference in how the storage medium is used and reflects the CD-ROM's origins as a medium for music storage, where continuous data retrieval is a more common operation than searching for a specific data point.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/ch-storage
4.3. Volume Group Administration
4.3. Volume Group Administration This section describes the commands that perform the various aspects of volume group administration. 4.3.1. Creating Volume Groups To create a volume group from one or more physical volumes, use the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it. The following command creates a volume group named vg1 that contains physical volumes /dev/sdd1 and /dev/sde1 . When physical volumes are used to create a volume group, its disk space is divided into 4MB extents, by default. This extent is the minimum amount by which the logical volume may be increased or decreased in size. Large numbers of extents will have no impact on I/O performance of the logical volume. You can specify the extent size with the -s option to the vgcreate command if the default extent size is not suitable. You can put limits on the number of physical or logical volumes the volume group can have by using the -p and -l arguments of the vgcreate command. By default, a volume group allocates physical extents according to common-sense rules such as not placing parallel stripes on the same physical volume. This is the normal allocation policy. You can use the --alloc argument of the vgcreate command to specify an allocation policy of contiguous , anywhere , or cling . The contiguous policy requires that new extents are adjacent to existing extents. If there are sufficient free extents to satisfy an allocation request but a normal allocation policy would not use them, the anywhere allocation policy will, even if that reduces performance by placing two stripes on the same physical volume. The cling policy places new extents on the same physical volume as existing extents in the same stripe of the logical volume. These policies can be changed using the vgchange command. In general, allocation policies other than normal are required only in special cases where you need to specify unusual or nonstandard extent allocation. LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout: For example, if you create two volume groups myvg1 and myvg2 , each with three logical volumes named lvo1 , lvo2 , and lvo3 , this create six device special files: The maximum device size with LVM is 8 Exabytes on 64-bit CPUs.
[ "vgcreate vg1 /dev/sdd1 /dev/sde1", "/dev/ vg / lv /", "/dev/myvg1/lv01 /dev/myvg1/lv02 /dev/myvg1/lv03 /dev/myvg2/lv01 /dev/myvg2/lv02 /dev/myvg2/lv03" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_admin
Chapter 3. Installation configuration parameters for IBM Z and IBM LinuxONE
Chapter 3. Installation configuration parameters for IBM Z and IBM LinuxONE Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 3.1. Available installation configuration parameters for IBM Z The following tables specify the required, optional, and IBM Z-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Table 3.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. .
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_z_and_ibm_linuxone/installation-config-parameters-ibm-z