title | content | commands | url |
---|---|---|---|
Chapter 21. KafkaAuthorizationSimple schema reference | Chapter 21. KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationSimple schema properties For simple authorization, Streams for Apache Kafka uses Kafka's built-in authorization plugins: the StandardAuthorizer for KRaft mode and the AclAuthorizer for ZooKeeper-based cluster management. ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to use simple authorization. Set the type property in the authorization section to the value simple , and configure a list of super users. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . 21.1. superUsers A list of user principals treated as super users, so that they are always allowed without querying ACL rules. An example of simple authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration . 21.2. KafkaAuthorizationSimple schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , and KafkaAuthorizationCustom . It must have the value simple for the type KafkaAuthorizationSimple . Property Property type Description type string Must be simple . superUsers string array List of super users. Should contain a list of user principals that should get unlimited access rights. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaAuthorizationSimple-reference |
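As the chapter notes, the per-user access rules live on the KafkaUser resource rather than on the Kafka resource shown above. The following is a minimal sketch of such a user; the user name, topic name, and chosen operations are illustrative assumptions, not values from the chapter, and the cluster label reuses the my-cluster name from the example.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                       # hypothetical user name
  labels:
    strimzi.io/cluster: my-cluster    # ties the user to the cluster from the example above
spec:
  authentication:
    type: tls
  authorization:
    type: simple                      # ACLs evaluated by the broker's simple authorizer
    acls:
      - resource:
          type: topic
          name: my-topic              # hypothetical topic name
          patternType: literal
        operation: Read
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"
```

See the ACLRule schema reference mentioned above for the full set of resource types, pattern types, and operations.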
Chapter 5. Machine phases and lifecycle | Chapter 5. Machine phases and lifecycle Machines move through a lifecycle that has several defined phases. Understanding the machine lifecycle and its phases can help you verify whether a procedure is complete or troubleshoot undesired behavior. In OpenShift Container Platform, the machine lifecycle is consistent across all supported cloud providers. 5.1. Machine phases As a machine moves through its lifecycle, it passes through different phases. Each phase is a basic representation of the state of the machine. Provisioning There is a request to provision a new machine. The machine does not yet exist and does not have an instance, a provider ID, or an address. Provisioned The machine exists and has a provider ID or an address. The cloud provider has created an instance for the machine. The machine has not yet become a node and the status.nodeRef section of the machine object is not yet populated. Running The machine exists and has a provider ID or address. Ignition has run successfully and the cluster machine approver has approved a certificate signing request (CSR). The machine has become a node and the status.nodeRef section of the machine object contains node details. Deleting There is a request to delete the machine. The machine object has a DeletionTimestamp field that indicates the time of the deletion request. Failed There is an unrecoverable problem with the machine. This can happen, for example, if the cloud provider deletes the instance for the machine. 5.2. The machine lifecycle The lifecycle begins with the request to provision a machine and continues until the machine no longer exists. The machine lifecycle proceeds in the following order. Interruptions due to errors or lifecycle hooks are not included in this overview. There is a request to provision a new machine for one of the following reasons: A cluster administrator scales a machine set such that it requires additional machines. An autoscaling policy scales a machine set such that it requires additional machines. A machine that is managed by a machine set fails or is deleted and the machine set creates a replacement to maintain the required number of machines. The machine enters the Provisioning phase. The infrastructure provider creates an instance for the machine. The machine has a provider ID or address and enters the Provisioned phase. The Ignition configuration file is processed. The kubelet issues a certificate signing request (CSR). The cluster machine approver approves the CSR. The machine becomes a node and enters the Running phase. An existing machine is slated for deletion for one of the following reasons: A user with cluster-admin permissions uses the oc delete machine command. The machine gets a machine.openshift.io/delete-machine annotation. The machine set that manages the machine marks it for deletion to reduce the replica count as part of reconciliation. The cluster autoscaler identifies a node that is unnecessary to meet the deployment needs of the cluster. A machine health check is configured to replace an unhealthy machine. The machine enters the Deleting phase, in which it is marked for deletion but is still present in the API. The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. 5.3. Determining the phase of a machine You can find the phase of a machine by using the OpenShift CLI ( oc ) or by using the web console. 
You can use this information to verify whether a procedure is complete or to troubleshoot undesired behavior. 5.3.1. Determining the phase of a machine by using the CLI You can find the phase of a machine by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. Procedure List the machines on the cluster by running the following command: $ oc get machine -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m The PHASE column of the output contains the phase of each machine. 5.3.2. Determining the phase of a machine by using the web console You can find the phase of a machine by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in to the web console as a user with the cluster-admin role. Navigate to Compute → Machines . On the Machines page, select the name of the machine that you want to find the phase of. On the Machine details page, select the YAML tab. In the YAML block, find the value of the status.phase field. Example YAML snippet apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t # ... status: phase: Running 1 1 In this example, the phase is Running . 5.4. Additional resources Lifecycle hooks for the machine deletion phase | [
"oc get machine -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_management/machine-phases-lifecycle |
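As a companion to the Running example above, a machine in the Deleting phase carries a deletionTimestamp while it is still present in the API. The following is a hedged sketch, not taken from the product documentation; the timestamp value is illustrative and the machine name is reused from the example output.

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: mycluster-5kbsp-worker-us-west-1a-fmx8t
  # Set by the API server when deletion is requested; the value shown is illustrative.
  deletionTimestamp: "2024-06-01T12:00:00Z"
status:
  phase: Deleting   # marked for deletion but still present in the API
```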
Chapter 4. ImageStreamImage [image.openshift.io/v1] | Chapter 4. ImageStreamImage [image.openshift.io/v1] Description ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. User interfaces and regular users can use this resource to access the metadata details of a tagged image in the image stream history for viewing, since Image resources are not directly accessible to end users. A not found error will be returned if no such image is referenced by a tag within the ImageStream. Images are created when spec tags are set on an image stream that represent an image in an external registry, when pushing to the integrated registry, or when tagging an existing image from one image stream to another. The name of an image stream image is in the form "<STREAM>@<DIGEST>", where the digest is the content-addressable identifier for the image (sha256:xxxxx... ). You can use ImageStreamImages as the from.kind of an image stream spec tag to reference an image exactly. The only operation supported on the imagestreamimage endpoint is retrieving the image. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required image 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows you to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 4.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 4.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. 
Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 4.1.4. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 4.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 4.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 4.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows you to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 4.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 4.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 4.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 4.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 4.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimages/{name} GET : read the specified ImageStreamImage 4.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimages/{name} Table 4.1. Global path parameters Parameter Type Description name string name of the ImageStreamImage HTTP method GET Description read the specified ImageStreamImage Table 4.2. HTTP responses HTTP code Response body 200 - OK ImageStreamImage schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/imagestreamimage-image-openshift-io-v1 |
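The description above notes that an ImageStreamImage can be used as the from.kind of an image stream spec tag to pin an image by digest. The following is an illustrative sketch of that usage; the stream name, tag name, and digest are placeholders, not values from this chapter.

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-app                        # hypothetical image stream
spec:
  tags:
    - name: pinned
      from:
        kind: ImageStreamImage
        # Uses the "<STREAM>@<DIGEST>" form described above; the digest is a placeholder.
        name: my-app@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```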
Red Hat OpenShift Software Certification Policy Guide | Red Hat OpenShift Software Certification Policy Guide Red Hat Software Certification 2025 For Use with Red Hat OpenShift Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openshift_software_certification_policy_guide/index |
Chapter 13. Installing a three-node cluster on Azure | Chapter 13. Installing a three-node cluster on Azure In OpenShift Container Platform version 4.15, you can install a three-node cluster on Microsoft Azure. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an Azure Marketplace image is not supported. 13.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on Azure using ARM templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 13.2. Next steps Installing a cluster on Azure with customizations Installing a cluster on Azure using ARM templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure/installing-azure-three-node |
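For orientation, the compute stanza shown above normally sits alongside a controlPlane stanza with three replicas in the same install-config.yaml file. The sketch below shows only those two sections together; the Azure platform settings and other required fields are omitted for brevity.

```yaml
apiVersion: v1
baseDomain: example.com
controlPlane:        # the three control plane machines also schedule application workloads
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform: {}
  replicas: 0        # no dedicated compute machines in a three-node cluster
# ...
```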
18.8. IPv6 | 18.8. IPv6 The introduction of the next-generation Internet Protocol, called IPv6, expands beyond the 32-bit address limit of IPv4 (or IP). IPv6 supports 128-bit addresses, and carrier networks that are IPv6-aware are therefore able to address a larger number of routable addresses than IPv4. Red Hat Enterprise Linux supports IPv6 firewall rules using the Netfilter 6 subsystem and the ip6tables command. In Red Hat Enterprise Linux 5, both IPv4 and IPv6 services are enabled by default. The ip6tables command syntax is identical to iptables in every aspect except that it supports 128-bit addresses. For example, use the following command to enable SSH connections on an IPv6-aware network server: For more information about IPv6 networking, refer to the IPv6 Information Page at http://www.ipv6.org/ . | [
"ip6tables -A INPUT -i eth0 -p tcp -s 3ffe:ffff:100::1/128 --dport 22 -j ACCEPT"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-firewall-ip6t |
Chapter 8. Tuning the Replication Performance | Chapter 8. Tuning the Replication Performance 8.1. Improving the Multi-supplier Replication Efficiency The replication latency in a multi-supplier replication environment, especially if the servers are connected using a wide area network (WAN), can be high when multiple suppliers receive updates at the same time. This happens when one supplier exclusively accesses a replica without releasing it for a long time. In such situations, other suppliers cannot send updates to this consumer, which increases the replication latency. To release a replica after a fixed amount of time, set the nsds5ReplicaReleaseTimeout parameter on replication suppliers and hubs. Note The default value of 60 seconds is ideal for most environments. A value set too high or too low can have a negative impact on the replication performance. If the value is set too low, replication servers are constantly reacquiring one another, and servers are not able to send many updates. In a high-traffic replication environment, a longer timeout can improve situations where one supplier exclusively accesses a replica. However, in most cases, a value higher than 120 seconds slows down replication. 8.1.1. Setting the Replication Release Timeout Using the Command Line To set the replication release timeout using the command line: Set the timeout value: This command sets the replication release timeout value for the dc=example,dc=com suffix to 70 seconds. Restart the Directory Server instance: 8.1.2. Setting the Replication Release Timeout Using the Web Console To set the replication release timeout using the Web Console: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. Open the Replication menu, and select Configuration . Click Show Advanced Settings . Set the timeout value in the Replication Release Timeout field. Click Save . Click the Actions button, and select Restart Instance . | [
"dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication set --suffix=\" dc=example,dc=com \" --repl-release-timeout= 70",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning_replication_performance |
4.4. GLOBAL SETTINGS | 4.4. GLOBAL SETTINGS The GLOBAL SETTINGS panel is where you define the networking details for the primary LVS router's public and private network interfaces. Figure 4.3. The GLOBAL SETTINGS Panel The top half of this panel sets up the primary LVS router's public and private network interfaces. These are the interfaces already configured in Section 3.1.1, "Configuring Network Interfaces for LVS with NAT" . Primary server public IP In this field, enter the publicly routable real IP address for the primary LVS node. Primary server private IP Enter the real IP address for an alternative network interface on the primary LVS node. This address is used solely as an alternative heartbeat channel for the backup router and does not have to correlate to the real private IP address assigned in Section 3.1.1, "Configuring Network Interfaces for LVS with NAT" . You may leave this field blank, but doing so will mean there is no alternate heartbeat channel for the backup LVS router to use and therefore will create a single point of failure. Note The private IP address is not needed for Direct Routing configurations, as all real servers as well as the LVS directors share the same virtual IP addresses and should have the same IP route configuration. Note The primary LVS router's private IP can be configured on any interface that accepts TCP/IP, whether it be an Ethernet adapter or a serial port. Use network type Click the NAT button to select NAT routing. Click the Direct Routing button to select direct routing. The next three fields deal specifically with the NAT router's virtual network interface connecting the private network with the real servers. These fields do not apply to the direct routing network type. NAT Router IP Enter the private floating IP in this text field. This floating IP should be used as the gateway for the real servers. NAT Router netmask If the NAT router's floating IP needs a particular netmask, select it from the drop-down list. NAT Router device Use this text field to define the device name of the network interface for the floating IP address, such as eth1:1 . Note You should alias the NAT floating IP address to the Ethernet interface connected to the private network. In this example, the private network is on the eth1 interface, so eth1:1 is the floating IP address. Warning After completing this page, click the ACCEPT button to make sure you do not lose any changes when selecting a new panel. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-piranha-globalset-VSA |
7.182. php | 7.182. php 7.182.1. RHSA-2013:0514 - php bug fix and enhancement update Updated php packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE link(s) associated with each description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fixes CVE-2011-1398 It was found that PHP did not check for carriage returns in HTTP headers, allowing intended HTTP response splitting protections to be bypassed. Depending on the web browser the victim is using, a remote attacker could use this flaw to perform HTTP response splitting attacks. CVE-2012-2688 An integer signedness issue, leading to a heap-based buffer underflow, was found in the PHP scandir() function. If a remote attacker could upload an excessively large number of files to a directory the scandir() function runs on, it could cause the PHP interpreter to crash or, possibly, execute arbitrary code. CVE-2012-0831 It was found that PHP did not correctly handle the magic_quotes_gpc configuration directive. A remote attacker could use this flaw to disable the option, which may make it easier to perform SQL injection attacks. Bug Fixes BZ# 771738 Prior to this update, if a negative array index value was sent to the var_export() function, the function returned an unsigned index ID. With this update, the function has been modified to process negative array index values correctly. BZ# 812819 Previously, the setDate() , setISODate() and setTime() functions did not work correctly when the corresponding DateTime object was created from the timestamp. This bug has been fixed and the aforementioned functions now work properly. BZ# 824199 Previously, a segmentation fault occurred when PDOStatement was reused after failing due to the NOT NULL integrity constraint. This occurred when the pdo_mysql driver was in use. With this update, a patch has been introduced to fix this issue. BZ# 833545 Prior to this update, the dependency of the php-mbstring package on php-common packages was missing an architecture-specific requirement. Consequently, attempts to install or patch php-common failed on machines with php-mbstring installed. With this update, the architecture-specific requirement has been added and php-common can now be installed without complications. BZ#836264 Previously, the strcpy() function, called by the extract_sql_error_rec() function in the unixODBC API, overwrote a guard variable in the pdo_odbc_error() function. Consequently, a buffer overflow occurred. This bug has been fixed and the buffer overflow no longer occurs. BZ# 848186 , BZ# 868375 Under certain circumstances, the $this object became corrupted, and behaved as a non-object. A test with the is_object() function remained positive, but any attempt to access a member variable of $this resulted in the following warning: This behavior was caused by a bug in the Zend garbage collector . With this update, a patch has been introduced to fix garbage collection. As a result, $this no longer becomes corrupted. BZ# 858653 Previously, the Fileinfo extension did not use the stat interface from the stream wrapper. 
Consequently, when used with a stream object, the Fileinfo extension failed with the following message: With this update, the Fileinfo extension has been fixed to use the stream wrapper's stat interface. Note that only the file and phar stream wrappers support the stat interface in PHP 5.3.3. BZ#859371 When the DISABLE_AUTHENTICATOR parameter of the imap_open() function was specified as an array, it ignored the array input. Consequently, a GSSAPI warning was shown. This bug has been fixed and DISABLE_AUTHENTICATOR now processes the array input correctly. BZ#864951 Previously, a PHP script using the ODBC interfaces could enter a deadlock when the maximum execution time period expired while it was executing an SQL statement. This occurred because the execution timer used a signal and the invoked ODBC functions were not re-entered. With this update, the underlying code has been modified and the deadlock is now less likely to occur. Enhancements BZ# 806132 , BZ# 824293 This update adds the php-fpm package, which provides the FastCGI Process Manager. BZ# 837042 With this update, a php(language) virtual provide for specifying the PHP language version has been added to the php package. BZ# 874987 Previously, the php-xmlreader and php-xmlwriter modules were missing virtual provides. With this update, these virtual provides have been added. All users of php are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 7.182.2. RHSA-2013:1049 - Critical: php security update Updated php packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fix CVE-2013-4113 A buffer overflow flaw was found in the way PHP parsed deeply nested XML documents. If a PHP application used the xml_parse_into_struct() function to parse untrusted XML content, an attacker able to supply specially-crafted XML could use this flaw to crash the application or, possibly, execute arbitrary code with the privileges of the user running the PHP interpreter. All php users should upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, the httpd daemon must be restarted for the update to take effect. | [
"Notice: Trying to get property of non-object",
"file not found"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/php |
Chapter 6. Ingress Operator in OpenShift Container Platform | Chapter 6. Ingress Operator in OpenShift Container Platform 6.1. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 6.2. The Ingress configuration asset The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml . YAML Definition of the Ingress resource apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com The installation program stores this asset in the cluster-ingress-02-config.yml file in the manifests/ directory. This Ingress resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows: The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller. The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route resource that does not specify an explicit host. 6.3. Ingress Controller configuration parameters The ingresscontrollers.operator.openshift.io resource offers the following configuration parameters. Parameter Description domain domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features: For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy . When using a generated default certificate, the certificate is valid for domain and its subdomains . See defaultCertificate . The value is published to individual Route statuses so that users know where to target external DNS records. The domain value must be unique among all Ingress Controllers and cannot be updated. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain . replicas replicas is the desired number of Ingress Controller replicas. If not set, the default value is 2 . endpointPublishingStrategy endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. 
If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform : Amazon Web Services (AWS): LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) Google Cloud Platform (GCP): LoadBalancerService (with External scope) Bare metal: NodePortService Other: HostNetwork Note HostNetwork has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80 , httpsPort: 443 , and statsPort: 1936 . With the binding ports, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy. Example apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936 Note On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService endpoint publishing strategy is only supported if a cloud provider is configured to create health monitors. For RHOSP 16.1 and 16.2, this strategy is only possible if you use the Amphora Octavia provider. For more information, see the "Setting cloud provider options" section of the RHOSP installation documentation. For most platforms, the endpointPublishingStrategy value can be updated. On GCP, you can configure the following endpointPublishingStrategy fields: loadBalancer.scope loadbalancer.providerParameters.gcp.clientAccess hostNetwork.protocol nodePort.protocol defaultCertificate The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. When Routes do not specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: * tls.crt : certificate file contents * tls.key : key file contents If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains , and the generated certificate's CA is automatically integrated with the cluster's trust store. The in-use certificate, whether generated or user-specified, is automatically integrated with the OpenShift Container Platform built-in OAuth server. namespaceSelector namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller. This is useful for implementing shards. routeSelector routeSelector is used to filter the set of Routes serviced by the Ingress Controller. This is useful for implementing shards. nodePlacement nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the default values are used. Note The nodePlacement parameter includes two parts, nodeSelector and tolerations . For example: nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists tlsSecurityProfile tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old , Intermediate , and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z , an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the Ingress Controller, resulting in a rollout. 
The minimum TLS version for Ingress Controllers is 1.1 , and the maximum TLS version is 1.3 . Note Ciphers and the minimum TLS version of the configured security profile are reflected in the TLSProfile status. Important The Ingress Operator converts the TLS 1.0 of an Old or Custom profile to 1.1 . clientTLS clientTLS authenticates client access to the cluster and services; as a result, mutual TLS authentication is enabled. If not set, then client TLS is not enabled. clientTLS has the required subfields, spec.clientTLS.clientCertificatePolicy and spec.clientTLS.ClientCA . The ClientCertificatePolicy subfield accepts one of the two values: Required or Optional . The ClientCA subfield specifies a config map that is in the openshift-config namespace. The config map should contain a CA certificate bundle. The AllowedSubjectPatterns is an optional value that specifies a list of regular expressions, which are matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. At least one pattern must match a client certificate's distinguished name; otherwise, the Ingress Controller rejects the certificate and denies the connection. If not specified, the Ingress Controller does not reject certificates based on the distinguished name. routeAdmission routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces. namespaceOwnership describes how hostname claims across namespaces should be handled. The default is Strict . Strict : does not allow routes to claim the same hostname across namespaces. InterNamespaceAllowed : allows routes to claim different paths of the same hostname across namespaces. wildcardPolicy describes how routes with wildcard policies are handled by the Ingress Controller. WildcardsAllowed : Indicates routes with any wildcard policy are admitted by the Ingress Controller. WildcardsDisallowed : Indicates only routes with a wildcard policy of None are admitted by the Ingress Controller. Updating wildcardPolicy from WildcardsAllowed to WildcardsDisallowed causes admitted routes with a wildcard policy of Subdomain to stop working. These routes must be recreated to a wildcard policy of None to be readmitted by the Ingress Controller. WildcardsDisallowed is the default setting. IngressControllerLogging logging defines parameters for what is logged where. If this field is empty, operational logs are enabled but access logs are disabled. access describes how client requests are logged. If this field is empty, access logging is disabled. destination describes a destination for log messages. type is the type of destination for logs: Container specifies that logs should go to a sidecar container. The Ingress Operator configures the container, named logs , on the Ingress Controller pod and configures the Ingress Controller to write logs to the container. The expectation is that the administrator configures a custom logging solution that reads logs from this container. Using container logs means that logs may be dropped if the rate of logs exceeds the container runtime capacity or the custom logging solution capacity. Syslog specifies that logs are sent to a Syslog endpoint. The administrator must specify an endpoint that can receive Syslog messages. The expectation is that the administrator has configured a custom Syslog instance. container describes parameters for the Container logging destination type. 
Currently there are no parameters for container logging, so this field must be empty. syslog describes parameters for the Syslog logging destination type: address is the IP address of the syslog endpoint that receives log messages. port is the UDP port number of the syslog endpoint that receives log messages. maxLength is the maximum length of the syslog message. It must be between 480 and 4096 bytes. If this field is empty, the maximum length is set to the default value of 1024 bytes. facility specifies the syslog facility of log messages. If this field is empty, the facility is local1 . Otherwise, it must specify a valid syslog facility: kern , user , mail , daemon , auth , syslog , lpr , news , uucp , cron , auth2 , ftp , ntp , audit , alert , cron2 , local0 , local1 , local2 , local3 , local4 , local5 , local6 , or local7 . httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation . httpHeaders httpHeaders defines the policy for HTTP headers. By setting the forwardedHeaderPolicy for the IngressControllerHTTPHeaders , you specify when and how the Ingress Controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. By default, the policy is set to Append . Append specifies that the Ingress Controller appends the headers, preserving any existing headers. Replace specifies that the Ingress Controller sets the headers, removing any existing headers. IfNone specifies that the Ingress Controller sets the headers if they are not already set. Never specifies that the Ingress Controller never sets the headers, preserving any existing headers. By setting headerNameCaseAdjustments , you can specify case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying X-Forwarded-For indicates that the x-forwarded-for HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. httpCompression httpCompression defines the policy for HTTP traffic compression. mimeTypes defines a list of MIME types to which compression should be applied. For example, text/css; charset=utf-8 , text/html , text/* , image/svg+xml , application/octet-stream , X-custom/customsub , using the format pattern, type/subtype; [;attribute=value] . The types are: application, image, message, multipart, text, video, or a custom type prefaced by X- . To see the full notation for MIME types and subtypes, see RFC 1341. httpErrorCodePages httpErrorCodePages specifies custom HTTP error code response pages. By default, an IngressController uses error pages built into the IngressController image. httpCaptureCookies httpCaptureCookies specifies HTTP cookies that you want to capture in access logs. If the httpCaptureCookies field is empty, the access logs do not capture the cookies. 
For any cookie that you want to capture, the following parameters must be in your IngressController configuration: name specifies the name of the cookie. maxLength specifies the maximum length of the cookie. matchType specifies if the field name of the cookie exactly matches the capture cookie setting or is a prefix of the capture cookie setting. The matchType field uses the Exact and Prefix parameters. For example: httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE httpCaptureHeaders httpCaptureHeaders specifies the HTTP headers that you want to capture in the access logs. If the httpCaptureHeaders field is empty, the access logs do not capture the headers. httpCaptureHeaders contains two lists of headers to capture in the access logs. The two lists of header fields are request and response . In both lists, the name field must specify the header name and the maxlength field must specify the maximum length of the header. For example: httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length tuningOptions tuningOptions specifies options for tuning the performance of Ingress Controller pods. clientFinTimeout specifies how long a connection is held open while waiting for the client response to the server closing the connection. The default timeout is 1s . clientTimeout specifies how long a connection is held open while waiting for a client response. The default timeout is 30s . headerBufferBytes specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes. Setting this field is not recommended because headerBufferBytes values that are too small can break the Ingress Controller, and headerBufferBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. headerBufferMaxRewriteBytes specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096 . headerBufferBytes must be greater than headerBufferMaxRewriteBytes for incoming HTTP requests. If not set, the default value is 8192 bytes. Setting this field is not recommended because headerBufferMaxRewriteBytes values that are too small can break the Ingress Controller and headerBufferMaxRewriteBytes values that are too large could cause the Ingress Controller to use significantly more memory than necessary. healthCheckInterval specifies how long the router waits between health checks. The default is 5s . serverFinTimeout specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s . serverTimeout specifies how long a connection is held open while waiting for a server response. The default timeout is 30s . threadCount specifies the number of threads to create per HAProxy process. Creating more threads allows each Ingress Controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the Ingress Controller uses the default value of 4 threads. The default value can change in future releases. 
Setting this field is not recommended because increasing the number of HAProxy threads allows Ingress Controller pods to use more CPU time under load, and can prevent other pods from receiving the CPU resources they need to perform. Reducing the number of threads can cause the Ingress Controller to perform poorly. tlsInspectDelay specifies how long the router can hold data to find a matching route. Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, reencrypted, or passthrough routes, even when using a better matched certificate. The default inspect delay is 5s . tunnelTimeout specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is 1h . maxConnections specifies the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections at the cost of additional system resources. Permitted values are 0 , -1 , any value within the range 2000 to 2000000 , or the field can be left empty. If this field is left empty or has the value 0 , the ingress controller will use the default value of 20000 . This value is subject to change in future releases. If the field has the value of -1 , then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. This process results in a large computed value that will incur significant memory usage compared to the current default value of 20000 . If the field has a value that is greater than the current operating system limit, the HAProxy process will not start. If you choose a discrete value and the router pod is migrated to a new node, it is possible the new node does not have an identical ulimit configured. In such cases, the pod fails to start. If you have nodes with different ulimits configured, and you choose a discrete value, it is recommended to use the value of -1 for this field so that the maximum number of connections is calculated at runtime. logEmptyRequests logEmptyRequests specifies connections for which no request is received and logged. These empty requests come from load balancer health probes or web browser speculative connections (preconnect) and logging these requests can be undesirable. However, these requests can be caused by network errors, in which case logging empty requests can be useful for diagnosing the errors. These requests can be caused by port scans, and logging empty requests can aid in detecting intrusion attempts. Allowed values for this field are Log and Ignore . The default value is Log . The LoggingPolicy type accepts either one of two values: Log : Setting this value to Log indicates that an event should be logged. Ignore : Setting this value to Ignore sets the dontlognull option in the HAproxy configuration. HTTPEmptyRequestsPolicy HTTPEmptyRequestsPolicy describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore . The default value is Respond . The HTTPEmptyRequestsPolicy type accepts either one of two values: Respond : If the field is set to Respond , the Ingress Controller sends an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics. Ignore : Setting this option to Ignore adds the http-ignore-probes parameter in the HAproxy configuration. 
If the field is set to Ignore , the Ingress Controller closes the connection without sending a response, logging the connection, or incrementing metrics. These connections come from load balancer health probes or web browser speculative connections (preconnect) and can be safely ignored. However, these requests can be caused by network errors, so setting this field to Ignore can impede detection and diagnosis of problems. These requests can be caused by port scans, in which case logging empty requests can aid in detecting intrusion attempts. Note All parameters are optional. 6.3.1. Ingress Controller TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a connecting client can use when connecting to the server. 6.3.1.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.3.1.2. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile .
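For example, assuming the default Ingress Controller, a single jsonpath query can read back both of these fields. The command is shown only as an illustration and is not part of the documented procedure; the field paths follow the CR structure described above: USD oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.status.tlsProfile}{"\n"}{.spec.tlsSecurityProfile}' An empty spec.tlsSecurityProfile indicates that the Ingress Controller inherits the TLS security profile set for the API server.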
For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 6.3.1.3. Configuring mutual TLS authentication You can configure the Ingress Controller to enable mutual TLS (mTLS) authentication by setting a spec.clientTLS value. The clientTLS value configures the Ingress Controller to verify client certificates. This configuration includes setting a clientCA value, which is a reference to a config map. The config map contains the PEM-encoded CA certificate bundle that is used to verify a client's certificate. Optionally, you can also configure a list of certificate subject filters. If the clientCA value specifies an X509v3 certificate revocation list (CRL) distribution point, the Ingress Operator downloads and manages a CRL config map based on the HTTP URI X509v3 CRL Distribution Point specified in each provided certificate. The Ingress Controller uses this config map during mTLS/TLS negotiation. Requests that do not provide valid certificates are rejected. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a PEM-encoded CA certificate bundle. If your CA bundle references a CRL distribution point, you must have also included the end-entity or leaf certificate in the client CA bundle. This certificate must include an HTTP URI under CRL Distribution Points , as described in RFC 5280.
For example: Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl Procedure In the openshift-config namespace, create a config map from your CA bundle: USD oc create configmap \ router-ca-certs-default \ --from-file=ca-bundle.pem=client-ca.crt \ 1 -n openshift-config 1 The config map data key must be ca-bundle.pem , and the data value must be a CA certificate in PEM format. Edit the IngressController resource in the openshift-ingress-operator project: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.clientTLS field and subfields to configure mutual TLS: Sample IngressController CR for a clientTLS profile that specifies filtering patterns apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - "^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD" 6.4. View the default Ingress Controller The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box. Every new OpenShift Container Platform installation has an ingresscontroller named default. It can be supplemented with additional Ingress Controllers. If the default ingresscontroller is deleted, the Ingress Operator will automatically recreate it within a minute. Procedure View the default Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/default 6.5. View Ingress Operator status You can view and inspect the status of your Ingress Operator. Procedure View your Ingress Operator status: USD oc describe clusteroperators/ingress 6.6. View Ingress Controller logs You can view your Ingress Controller logs. Procedure View your Ingress Controller logs: USD oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name> 6.7. View Ingress Controller status You can view the status of a particular Ingress Controller. Procedure View the status of an Ingress Controller: USD oc describe --namespace=openshift-ingress-operator ingresscontroller/<name> 6.8. Configuring the Ingress Controller 6.8.1. Setting a custom default certificate As an administrator, you can configure an Ingress Controller to use a custom certificate by creating a Secret resource and editing the IngressController custom resource (CR). Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI. Your certificate meets the following requirements: The certificate is valid for the ingress domain. The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com . You must have an IngressController CR. You may use the default one: USD oc --namespace openshift-ingress-operator get ingresscontrollers Example output NAME AGE default 10m Note If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s). Procedure The following assumes that the custom certificate and key pair are in the tls.crt and tls.key files in the current working directory.
Substitute the actual path names for tls.crt and tls.key . You also may substitute another name for custom-certs-default when creating the Secret resource and referencing it in the IngressController CR. Note This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy. Create a Secret resource containing the custom certificate in the openshift-ingress namespace using the tls.crt and tls.key files. USD oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key Update the IngressController CR to reference the new certificate secret: USD oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \ --patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}' Verify the update was effective: USD echo Q |\ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null |\ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM Tip You can alternatively apply the following YAML to set a custom default certificate: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default The certificate secret name should match the value used to update the CR. Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller's deployment to use the custom certificate. 6.8.2. Removing a custom default certificate As an administrator, you can remove a custom certificate that you configured an Ingress Controller to use. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You previously configured a custom default certificate for the Ingress Controller. Procedure To remove the custom certificate and restore the certificate that ships with OpenShift Container Platform, enter the following command: USD oc patch -n openshift-ingress-operator ingresscontrollers/default \ --type json -p USD'- op: remove\n path: /spec/defaultCertificate' There can be a delay while the cluster reconciles the new certificate configuration. Verification To confirm that the original cluster certificate is restored, enter the following command: USD echo Q | \ openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | \ openssl x509 -noout -subject -issuer -enddate where: <domain> Specifies the base domain name for your cluster. Example output subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT 6.8.3. Scaling an Ingress Controller Manually scale an Ingress Controller to meet routing performance or availability requirements such as the requirement to increase throughput. oc commands are used to scale the IngressController resource. The following procedure provides an example for scaling up the default IngressController . Note Scaling is not an immediate action, as it takes time to create the desired number of replicas.
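Because the new replicas take time to become available, it can help to watch the router deployment in the openshift-ingress namespace while the change rolls out. The following watch command is an illustration only and is not part of the documented procedure: USD oc -n openshift-ingress get deployment/router-default -w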
Procedure View the current number of available replicas for the default IngressController : USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 2 Scale the default IngressController to the desired number of replicas using the oc patch command. The following example scales the default IngressController to 3 replicas: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge Example output ingresscontroller.operator.openshift.io/default patched Verify that the default IngressController scaled to the number of replicas that you specified: USD oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}' Example output 3 Tip You can alternatively apply the following YAML to scale an Ingress Controller to three replicas: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1 1 If you need a different amount of replicas, change the replicas value. 6.8.4. Configuring Ingress access logging You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs. Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller. Syslog is needed for high-traffic clusters where access logs could exceed the OpenShift Logging stack's capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap. Prerequisites Log in as a user with cluster-admin privileges. Procedure Configure Ingress access logging to a sidecar. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a sidecar container, you must specify Container spec.logging.access.destination.type . The following example is an Ingress Controller definition that logs to a Container destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs inside the Ingress Controller Pod: USD oc -n openshift-ingress logs deployment.apps/router-default -c logs Example output 2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1" Configure Ingress access logging to a Syslog endpoint. To configure Ingress access logging, you must specify a destination using spec.logging.access.destination . To specify logging to a Syslog endpoint destination, you must specify Syslog for spec.logging.access.destination.type . 
If the destination type is Syslog , you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint and you can specify a facility using spec.logging.access.destination.syslog.facility . The following example is an Ingress Controller definition that logs to a Syslog destination: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 Note The syslog destination port must be UDP. Configure Ingress access logging with a specific log format. You can specify spec.logging.access.httpLogFormat to customize the log format. The following example is an Ingress Controller definition that logs to a syslog endpoint with IP address 1.2.3.4 and port 10514: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV' Disable Ingress access logging. To disable Ingress access logging, leave spec.logging or spec.logging.access empty: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null 6.8.5. Setting Ingress Controller thread count A cluster administrator can set the thread count to increase the amount of incoming connections a cluster can handle. You can patch an existing Ingress Controller to increase the amount of threads. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to increase the number of threads: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"threadCount": 8}}}' Note If you have a node that is capable of running large amounts of resources, you can configure spec.nodePlacement.nodeSelector with labels that match the capacity of the intended node, and configure spec.tuningOptions.threadCount to an appropriately high value. 6.8.6. Configuring an Ingress Controller to use an internal load balancer When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer. Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Figure 6.1. Diagram of LoadBalancer The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress LoadBalancerService endpoint publishing strategy: You can load balance externally, using the cloud provider load balancer, or internally, using the OpenShift Ingress Controller Load Balancer. You can use the single IP address of the load balancer and more familiar ports, such as 8080 and 4200 as shown on the cluster depicted in the graphic. Traffic from the external load balancer is directed at the pods, and managed by the load balancer, as depicted in the instance of a down node. 
See the Kubernetes Services documentation for implementation details. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IngressController custom resource (CR) in a file named <name>-ingress-controller.yaml , such as in the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3 1 Replace <name> with a name for the IngressController object. 2 Specify the domain for the application published by the controller. 3 Specify a value of Internal to use an internal load balancer. Create the Ingress Controller defined in the step by running the following command: USD oc create -f <name>-ingress-controller.yaml 1 1 Replace <name> with the name of the IngressController object. Optional: Confirm that the Ingress Controller was created by running the following command: USD oc --all-namespaces=true get ingresscontrollers 6.8.7. Configuring global access for an Ingress Controller on GCP An Ingress Controller created on GCP with an internal load balancer generates an internal IP address for the service. A cluster administrator can specify the global access option, which enables clients in any region within the same VPC network and compute region as the load balancer, to reach the workloads running on your cluster. For more information, see the GCP documentation for global access . Prerequisites You deployed an OpenShift Container Platform cluster on GCP infrastructure. You configured an Ingress Controller to use an internal load balancer. You installed the OpenShift CLI ( oc ). Procedure Configure the Ingress Controller resource to allow global access. Note You can also create an Ingress Controller and specify the global access option. Configure the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Edit the YAML file: Sample clientAccess configuration to Global spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService 1 Set gcp.clientAccess to Global . Save the file to apply the changes. Run the following command to verify that the service allows global access: USD oc -n openshift-ingress edit svc/router-default -o yaml The output shows that global access is enabled for GCP with the annotation, networking.gke.io/internal-load-balancer-allow-global-access . 6.8.8. Setting the Ingress Controller health check interval A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. This value is applied globally as a default for all routes. The default value is 5 seconds. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to change the interval between back end health checks: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"healthCheckInterval": "8s"}}}' Note To override the healthCheckInterval for a single route, use the route annotation router.openshift.io/haproxy.health.check.interval 6.8.9. Configuring the default Ingress Controller for your cluster to be internal You can configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. 
Warning If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet. Important If you want to change the scope for an IngressController , you can change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Configure the default Ingress Controller for your cluster to be internal by deleting and recreating it. USD oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF 6.8.10. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 6.8.11. Using wildcard routes The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy to configure the ROUTER_ALLOW_WILDCARD_ROUTES environment variable of the Ingress Controller. The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None , which is backwards compatible with existing IngressController resources. Procedure Configure the wildcard policy. Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the wildcardPolicy field to WildcardsDisallowed or WildcardsAllowed : spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed 6.8.12. Using X-Forwarded headers You configure the HAProxy Ingress Controller to specify a policy for how to handle HTTP headers including Forwarded and X-Forwarded-For . The Ingress Operator uses the HTTPHeaders field to configure the ROUTER_SET_FORWARDED_HEADERS environment variable of the Ingress Controller. Procedure Configure the HTTPHeaders field for the Ingress Controller. 
Use the following command to edit the IngressController resource: USD oc edit IngressController Under spec , set the HTTPHeaders policy field to Append , Replace , IfNone , or Never : apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append Example use cases As a cluster administrator, you can: Configure an external proxy that injects the X-Forwarded-For header into each request before forwarding it to an Ingress Controller. To configure the Ingress Controller to pass the header through unmodified, you specify the never policy. The Ingress Controller then never sets the headers, and applications receive only the headers that the external proxy provides. Configure the Ingress Controller to pass the X-Forwarded-For header that your external proxy sets on external cluster requests through unmodified. To configure the Ingress Controller to set the X-Forwarded-For header on internal cluster requests, which do not go through the external proxy, specify the if-none policy. If an HTTP request already has the header set through the external proxy, then the Ingress Controller preserves it. If the header is absent because the request did not come through the proxy, then the Ingress Controller adds the header. As an application developer, you can: Configure an application-specific external proxy that injects the X-Forwarded-For header. To configure an Ingress Controller to pass the header through unmodified for an application's Route, without affecting the policy for other Routes, add an annotation haproxy.router.openshift.io/set-forwarded-headers: if-none or haproxy.router.openshift.io/set-forwarded-headers: never on the Route for the application. Note You can set the haproxy.router.openshift.io/set-forwarded-headers annotation on a per route basis, independent from the globally set value for the Ingress Controller. 6.8.13. Enabling HTTP/2 Ingress connectivity You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more. You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster. To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate. The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes. Warning Using WebSockets with a re-encrypt route and with HTTP/2 enabled on an Ingress Controller requires WebSocket support over HTTP/2. WebSockets over HTTP/2 is a feature of HAProxy 2.4, which is unsupported in OpenShift Container Platform at this time. Important For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. 
This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol. Procedure Enable HTTP/2 on a single Ingress Controller. To enable HTTP/2 on an Ingress Controller, enter the oc annotate command: USD oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true Replace <ingresscontroller_name> with the name of the Ingress Controller to annotate. Enable HTTP/2 on the entire cluster. To enable HTTP/2 for the entire cluster, enter the oc annotate command: USD oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: "true" 6.8.14. Configuring the PROXY protocol for an Ingress Controller A cluster administrator can configure the PROXY protocol when an Ingress Controller uses either the HostNetwork or NodePortService endpoint publishing strategy types. The PROXY protocol enables the load balancer to preserve the original client addresses for connections that the Ingress Controller receives. The original client addresses are useful for logging, filtering, and injecting HTTP headers. In the default configuration, the connections that the Ingress Controller receives only contain the source address that is associated with the load balancer. This feature is not supported in cloud deployments. This restriction is because when OpenShift Container Platform runs in a cloud platform, and an IngressController specifies that a service load balancer should be used, the Ingress Operator configures the load balancer service and enables the PROXY protocol based on the platform requirement for preserving source addresses. Important You must configure both OpenShift Container Platform and the external load balancer to either use the PROXY protocol or to use TCP. Warning The PROXY protocol is unsupported for the default Ingress Controller with installer-provisioned clusters on non-cloud platforms that use a Keepalived Ingress VIP. Prerequisites You created an Ingress Controller. 
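Because the PROXY protocol setting applies only to the HostNetwork and NodePortService endpoint publishing strategy types, it can be useful to confirm which strategy your Ingress Controller uses before editing it. The following read-only check is shown as an illustration and is not part of the documented procedure: USD oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.spec.endpointPublishingStrategy.type}'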
Procedure Edit the Ingress Controller resource: USD oc -n openshift-ingress-operator edit ingresscontroller/default Set the PROXY configuration: If your Ingress Controller uses the hostNetwork endpoint publishing strategy type, set the spec.endpointPublishingStrategy.hostNetwork.protocol subfield to PROXY : Sample hostNetwork configuration to PROXY spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork If your Ingress Controller uses the NodePortService endpoint publishing strategy type, set the spec.endpointPublishingStrategy.nodePort.protocol subfield to PROXY : Sample nodePort configuration to PROXY spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService 6.8.15. Specifying an alternative cluster domain using the appsDomain option As a cluster administrator, you can specify an alternative to the default cluster domain for user-created routes by configuring the appsDomain field. The appsDomain field is an optional domain for OpenShift Container Platform to use instead of the default, which is specified in the domain field. If you specify an alternative domain, it overrides the default cluster domain for the purpose of determining the default host for a new route. For example, you can use the DNS domain for your company as the default domain for routes and ingresses for applications running on your cluster. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc command line interface. Procedure Configure the appsDomain field by specifying an alternative default domain for user-created routes. Edit the ingress cluster resource: USD oc edit ingresses.config/cluster -o yaml Edit the YAML file: Sample appsDomain configuration to test.example.com apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2 1 Specifies the default domain. You cannot modify the default domain after installation. 2 Optional: Domain for OpenShift Container Platform infrastructure to use for application routes. Instead of the default prefix, apps , you can use an alternative prefix like test . Verify that an existing route contains the domain name specified in the appsDomain field by exposing the route and verifying the route domain change: Note Wait for the openshift-apiserver to finish rolling updates before exposing the route. Expose the route: USD oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed Example output: USD oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None 6.8.16. Converting HTTP header case HAProxy 2.2 lowercases HTTP header names by default, for example, changing Host: xyz.com to host: xyz.com . If legacy applications are sensitive to the capitalization of HTTP header names, use the Ingress Controller spec.httpHeaders.headerNameCaseAdjustments API field for a solution to accommodate legacy applications until they can be fixed. Important Because OpenShift Container Platform includes HAProxy 2.2, make sure to add the necessary configuration by using spec.httpHeaders.headerNameCaseAdjustments before upgrading. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role.
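After you complete either variant of the following procedure, you can confirm that the adjustment was recorded in the CR with a read-back such as the following; the command is illustrative only and is not part of the documented procedure: USD oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.spec.httpHeaders.headerNameCaseAdjustments}'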
Procedure As a cluster administrator, you can convert the HTTP header case by entering the oc patch command or by setting the HeaderNameCaseAdjustments field in the Ingress Controller YAML file. Specify an HTTP header to be capitalized by entering the oc patch command. Enter the oc patch command to change the HTTP host header to Host : USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"httpHeaders":{"headerNameCaseAdjustments":["Host"]}}}' Annotate the route of the application: USD oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true The Ingress Controller then adjusts the host request header as specified. Specify adjustments using the HeaderNameCaseAdjustments field by configuring the Ingress Controller YAML file. The following example Ingress Controller YAML adjusts the host header to Host for HTTP/1 requests to appropriately annotated routes: Example Ingress Controller YAML apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host The following example route enables HTTP response header name case adjustments using the haproxy.router.openshift.io/h1-adjust-case annotation: Example route YAML apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application 1 Set haproxy.router.openshift.io/h1-adjust-case to true. 6.8.17. Using router compression You configure the HAProxy Ingress Controller to specify router compression globally for specific MIME types. You can use the mimeTypes variable to define the formats of MIME types to which compression is applied. The types are: application, image, message, multipart, text, video, or a custom type prefaced by "X-". To see the full notation for MIME types and subtypes, see RFC1341 . Note Memory allocated for compression can affect the max connections. Additionally, compression of large buffers can cause latency, like heavy regex or long lists of regex. Not all MIME types benefit from compression, but HAProxy still uses resources to try to compress if instructed to. Generally, text formats, such as html, css, and js, benefit from compression, but formats that are already compressed, such as image, audio, and video, benefit little in exchange for the time and resources spent on compression. Procedure Configure the httpCompression field for the Ingress Controller. Use the following command to edit the IngressController resource: USD oc edit -n openshift-ingress-operator ingresscontrollers/default Under spec , set the httpCompression policy field to mimeTypes and specify a list of MIME types that should have compression applied: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - "text/html" - "text/css; charset=utf-8" - "application/json" ... 6.8.18. Exposing router metrics You can expose the HAProxy router metrics by default in Prometheus format on the default stats port, 1936. External metrics collection and aggregation systems such as Prometheus can access the HAProxy router metrics. You can view the HAProxy router metrics in a browser in the HTML and comma-separated values (CSV) format. Prerequisites You configured your firewall to access the default stats port, 1936.
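If the router IP address is not directly reachable from your workstation, port-forwarding to the router deployment is one possible way to reach the stats endpoint locally. This alternative is not part of the documented procedure and is shown only as a hypothetical example: USD oc -n openshift-ingress port-forward deployment/router-default 1936:1936 USD curl -u <user>:<password> http://localhost:1936/metrics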
Procedure Get the router pod name by running the following command: USD oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h Get the router's username and password, which the router pod stores in the /var/lib/haproxy/conf/metrics-auth/statsUsername and /var/lib/haproxy/conf/metrics-auth/statsPassword files: Get the username by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsUsername Get the password by running the following command: USD oc rsh <router_pod_name> cat metrics-auth/statsPassword Get the router IP and metrics certificates by running the following command: USD oc describe pod <router_pod> Get the raw statistics in Prometheus format by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Access the metrics securely by running the following command: USD curl -u user:password https://<router_IP>:<stats_port>/metrics -k Access the default stats port, 1936, by running the following command: USD curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics Example 6.1. Example output ... # HELP haproxy_backend_connections_total Total number of connections. # TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0 ... # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. # TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type="current"} 11 haproxy_exporter_server_threshold{type="limit"} 500 ... # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. # TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0 haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0 haproxy_frontend_bytes_in_total{frontend="public"} 119070 ... # HELP haproxy_server_bytes_in_total Current total of incoming bytes. # TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0 haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0 ... Launch the stats window by entering the following URL in a browser: http://<user>:<password>@<router_IP>:<stats_port> Optional: Get the stats in CSV format by entering the following URL in a browser: http://<user>:<password>@<router_ip>:1936/metrics;csv 6.8.19. Customizing HAProxy error code response pages As a cluster administrator, you can specify a custom error code response page for either 503, 404, or both error pages. The HAProxy router serves a 503 error page when the application pod is not running or a 404 error page when the requested URL does not exist. For example, if you customize the 503 error code response page, then the page is served when the application pod is not running, and the default 404 error code HTTP response page is served by the HAProxy router for an incorrect route or a non-existing route. 
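Each custom page is a complete, raw HTTP response: a status line, headers that use CRLF line endings, a blank line, and the body. The following minimal error-page-503.http sketch is a hypothetical illustration of that structure, not the default OpenShift Container Platform page:
HTTP/1.0 503 Service Unavailable
Pragma: no-cache
Cache-Control: private, max-age=0, no-cache, no-store
Connection: close
Content-Type: text/html

<html><body><h1>503 Service Unavailable</h1><p>The application is not available.</p></body></html>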
Custom error code response pages are specified in a config map and then referenced by patching the Ingress Controller. The config map keys have two available file names as follows: error-page-503.http and error-page-404.http . Custom HTTP error code response pages must follow the HAProxy HTTP error page configuration guidelines . Here is an example of the default OpenShift Container Platform HAProxy router http 503 error code response page . You can use the default content as a template for creating your own custom page. By default, the HAProxy router serves only a 503 error page when the application is not running or when the route is incorrect or non-existent. This default behavior is the same as the behavior on OpenShift Container Platform 4.8 and earlier. If a config map for the customization of an HTTP error code response is not provided, and you are using a custom HTTP error code response page, the router serves a default 404 or 503 error code response page. Note If you use the OpenShift Container Platform default 503 error code page as a template for your customizations, the headers in the file require an editor that can use CRLF line endings. Procedure Create a config map named my-custom-error-code-pages in the openshift-config namespace: USD oc -n openshift-config create configmap my-custom-error-code-pages \ --from-file=error-page-503.http \ --from-file=error-page-404.http Important If you do not specify the correct format for the custom error code response page, a router pod outage occurs. To resolve this outage, you must delete or correct the config map and delete the affected router pods so they can be recreated with the correct information. Patch the Ingress Controller to reference the my-custom-error-code-pages config map by name: USD oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"httpErrorCodePages":{"name":"my-custom-error-code-pages"}}}' --type=merge The Ingress Operator copies the my-custom-error-code-pages config map from the openshift-config namespace to the openshift-ingress namespace. The Operator names the config map according to the pattern, <your_ingresscontroller_name>-errorpages , in the openshift-ingress namespace. Display the copy: USD oc get cm default-errorpages -n openshift-ingress Example output 1 The example config map name is default-errorpages because the default Ingress Controller custom resource (CR) was patched. Confirm that the config map containing the custom error response page mounts on the router volume where the config map key is the filename that has the custom HTTP error code response: For the 503 custom HTTP error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http For the 404 custom HTTP error code response: USD oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http Verification Verify your custom error code HTTP response: Create a test project and application: USD oc new-project test-ingress USD oc new-app django-psql-example For the 503 custom HTTP error code response: Stop all the pods for the application. Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> For the 404 custom HTTP error code response: Visit a non-existent route or an incorrect route.
Run the following curl command or visit the route hostname in the browser: USD curl -vk <route_hostname> Check that the errorfile attribute is properly set in the haproxy.config file: USD oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile 6.8.20. Setting the Ingress Controller maximum connections A cluster administrator can set the maximum number of simultaneous connections for OpenShift router deployments. You can patch an existing Ingress Controller to increase the maximum number of connections. Prerequisites The following assumes that you already created an Ingress Controller. Procedure Update the Ingress Controller to change the maximum number of connections for HAProxy: USD oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions": {"maxConnections": 7500}}}' Warning If you set the spec.tuningOptions.maxConnections value greater than the current operating system limit, the HAProxy process will not start. See the table in the "Ingress Controller configuration parameters" section for more information about this parameter. 6.9. Additional resources Configuring a custom PKI | [
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE",
"httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl",
"oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService",
"oc -n openshift-ingress edit svc/router-default -o yaml",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"healthCheckInterval\": \"8s\"}}}'",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork",
"spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'",
"oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application",
"oc edit -n openshift-ingress-operator ingresscontrollers/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - \"text/css; charset=utf-8\" - \"application/json\"",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h",
"oc rsh <router_pod_name> cat metrics-auth/statsUsername",
"oc rsh <router_pod_name> cat metrics-auth/statsPassword",
"oc describe pod <router_pod>",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"curl -u user:password https://<router_IP>:<stats_port>/metrics -k",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"http://<user>:<password>@<router_IP>:<stats_port>",
"http://<user>:<password>@<router_ip>:1936/metrics;csv",
"oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge",
"oc get cm default-errorpages -n openshift-ingress",
"NAME DATA AGE default-errorpages 2 25s 1",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http",
"oc new-project test-ingress",
"oc new-app django-psql-example",
"curl -vk <route_hostname>",
"curl -vk <route_hostname>",
"oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"maxConnections\": 7500}}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/configuring-ingress |
Chapter 6. Device Drivers | Chapter 6. Device Drivers 6.1. New drivers Network drivers MT7921E 802.11ax wireless driver (mt7921e.ko.xz) Realtek 802.11ax wireless core module (rtw89_core.ko.xz) Realtek 802.11ax wireless PCI driver (rtw89_pci.ko.xz) ntb_netdev (ntb_netdev.ko.xz) Intel(R) Ethernet Protocol Driver for RDMA (irdma.ko.xz) Intel(R) PCI-E Non-Transparent Bridge Driver (ntb_hw_intel.ko.xz) Graphics drivers and miscellaneous drivers Generic Counter interface (counter.ko.xz) Intel Quadrature Encoder Peripheral driver (intel-qep.ko.xz) AMD (R) PCIe MP2 Communication Driver (amd_sfh.ko.xz) Driver to initialize some steering wheel joysticks from Thrustmaster (hid-thrustmaster.ko.xz) HID over I2C ACPI driver (i2c-hid-acpi.ko.xz) Intel PMC Core Driver (intel_pmc_core.ko.xz) ThinkLMI Driver (think-lmi.ko.xz) Processor Thermal Reporting Device Driver (int3401_thermal.ko.xz) Processor Thermal Reporting Device Driver (processor_thermal_device_pci.ko.xz) Processor Thermal Reporting Device Driver (processor_thermal_device_pci_legacy.ko.xz) TI TPS6598x USB Power Delivery Controller Driver (tps6598x.ko.xz) 6.2. Updated drivers Network drivers Intel(R) PRO/1000 Network Driver (e1000e.ko.xz) has been updated. Intel(R) Ethernet Switch Host Interface Driver (fm10k.ko.xz) has been updated. Intel(R) Ethernet Connection XL710 Network Driver (i40e.ko.xz) has been updated. Intel(R) Ethernet Adaptive Virtual Function Network Driver (iavf.ko.xz) has been updated. Intel(R) Gigabit Ethernet Network Driver (igb.ko.xz) has been updated. Intel(R) Gigabit Virtual Function Network Driver (igbvf.ko.xz) has been updated. Intel(R) 2.5G Ethernet Linux Driver (igc.ko.xz) has been updated. Intel(R) 10 Gigabit PCI Express Network Driver (ixgbe.ko.xz) has been updated. Intel(R) 10 Gigabit Virtual Function Network Driver (ixgbevf.ko.xz) has been updated. Mellanox 5th generation network adapters (ConnectX series) core driver (mlx5_core.ko.xz) has been updated. VMware vmxnet3 virtual NIC driver (vmxnet3.ko.xz) has been updated to version 1.6.0.0-k. Storage drivers Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:14.0.0.4. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.719.03.00-rh1. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 39.100.00.00. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.02.06.200-k. Driver for Microchip Smart Family Controller (smartpqi.ko.xz) has been updated to version 2.1.12-055. Graphics and miscellaneous driver updates Standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.18.1.0. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/device_drivers |
Chapter 12. Working with Large Messages | Chapter 12. Working with Large Messages JBoss EAP messaging supports large messages, even when the client or server has limited amounts of memory. Large messages can be streamed as they are, or they can be compressed further for more efficient transferral. A user can send a large message by setting an InputStream in the body of the message. When the message is sent, JBoss EAP messaging reads this InputStream and transmits data to the server in fragments. Neither the client nor the server stores the complete body of a large message in memory. The consumer initially receives a large message with an empty body and thereafter sets an OutputStream on the message to stream it in fragments to a disk file. Warning When processing large messages, the server does not handle message properties in the same way as the message body. For example a message with a property set to a string that is bigger than journal-buffer-size cannot be processed by the server because it overfills the journal buffer. 12.1. Streaming Large Messages If you send large messages the standard way, the heap size required to send them can be four or more times the size of the message, meaning a 1 GB message can require 4 GB in heap memory. For this reason, JBoss EAP messaging supports setting the body of messages using the java.io.InputStream and java.io.OutputStream classes, which require much less memory. Input streams are used directly for sending messages and output streams are used for receiving messages. When receiving messages, there are two ways to deal with the output stream: You can block while the output stream is recovered using the ClientMessage.saveToOutputStream(OutputStream out) method. You can use the ClientMessage.setOutputstream(OutputStream out) method to asynchronously write the message to the stream. This method requires that the consumer be kept alive until the message has been fully received. You can use any kind of stream you like, for example files, JDBC Blobs, or SocketInputStream, as long as it implements java.io.InputStream for sending messages and java.io.OutputStream for receiving messages. Streaming Large Messages Using the Core API The following table shows the methods available on the ClientMessage class that are available through Jakarta Messaging by using object properties. ClientMessage Method Description Jakarta Messaging Equivalent Property setBodyInputStream(InputStream) Set the InputStream used to read a message body when it is sent. JMS_AMQ_InputStream setOutputStream(OutputStream) Set the OutputStream that will receive the body of a message. This method does not block. JMS_AMQ_OutputStream saveOutputStream(OutputStream) Save the body of the message to the OutputStream . It will block until the entire content is transferred to the OutputStream . JMS_AMQ_SaveStream The following code example sets the output stream when receiving a core message. ClientMessage firstMessage = consumer.receive(...); // Block until the stream is transferred firstMessage.saveOutputStream(firstOutputStream); ClientMessage secondMessage = consumer.receive(...); // Do not wait for the transfer to finish secondMessage.setOutputStream(secondOutputStream); The following code example sets the input stream when sending a core message: ClientMessage clientMessage = session.createMessage(); clientMessage.setInputStream(dataInputStream); Note For messages larger than 2GiB, you must use the _AMQ_LARGE_SIZE message property. 
This is because the getBodySize() method will return an invalid value because it is limited to the maximum integer value. Streaming Large Messages Over Jakarta Messaging When using Jakarta Messaging, JBoss EAP messaging maps the core API streaming methods by setting object properties. You use the Message.setObjectProperty(String name, Object value) method to set the input and output streams. The InputStream is set using the JMS_AMQ_InputStream property on messages being sent. BytesMessage bytesMessage = session.createBytesMessage(); FileInputStream fileInputStream = new FileInputStream(fileInput); BufferedInputStream bufferedInput = new BufferedInputStream(fileInputStream); bytesMessage.setObjectProperty("JMS_AMQ_InputStream", bufferedInput); someProducer.send(bytesMessage); The OutputStream is set using the JMS_AMQ_SaveStream property on messages being received in a blocking manner. BytesMessage messageReceived = (BytesMessage) messageConsumer.receive(120000); File outputFile = new File("huge_message_received.dat"); FileOutputStream fileOutputStream = new FileOutputStream(outputFile); BufferedOutputStream bufferedOutput = new BufferedOutputStream(fileOutputStream); // This will block until the entire content is saved on disk messageReceived.setObjectProperty("JMS_AMQ_SaveStream", bufferedOutput); The OutputStream can also be set in a non-blocking manner by using the JMS_AMQ_OutputStream property. // This does not wait for the stream to finish. You must keep the consumer active. messageReceived.setObjectProperty("JMS_AMQ_OutputStream", bufferedOutput); Note When streaming large messages using Jakarta Messaging, only StreamMessage and BytesMessage objects are supported. 12.2. Configuring Large Messages 12.2.1. Configure Large Message Location You can read the configuration for the large messages directory by using the management CLI command below. The output is also included to highlight the default configuration. Important To achieve the best performance, it is recommended to store the large messages directory on a different physical volume from the message journal or the paging directory. The large-messages-directory configuration element is used to specify a location on the filesystem to store the large messages. Note that by default the path is activemq/largemessages . You can change the location of the path by using the following management CLI command. Also note the relative-to attribute in the output above. When relative-to is used, the value of the path attribute is treated as relative to the file path specified by relative-to . By default this value is the JBoss EAP property jboss.server.data.dir . For standalone servers, jboss.server.data.dir is located at EAP_HOME /standalone/data . For domains, each server will have its own serverX/data/activemq directory located under EAP_HOME /domain/servers . You can change the value of relative-to using the following management CLI command. Configuring Large Message Size Use the management CLI to view the current configuration for large messages. Note that this configuration is part of a connection-factory element. For example, to read the current configuration for the default RemoteConnectionFactory that is included, use the following command: Set the attribute using a similar syntax. Note The value of the attribute min-large-message-size should be in bytes. Configuring Large Message Compression You can choose to compress large messages for fast and efficient transfer. All compression/decompression operations are handled on the client side.
If the compressed message is smaller than min-large-message-size , it is sent to the server as a regular message. Compress large messages by setting the boolean property compress-large-messages to true using the management CLI. 12.2.2. Configuring Large Message Size Using the Core API If you are using the core API on the client side, you need to use the setMinLargeMessageSize method to specify the minimum size of large messages. The minimum size of large messages ( min-large-message-size ) is set to 100KB by default. ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(new TransportConfiguration(InVMConnectorFactory.class.getName())) locator.setMinLargeMessageSize(25 * 1024); ClientSessionFactory factory = ActiveMQClient.createClientSessionFactory(); | [
"ClientMessage firstMessage = consumer.receive(...); // Block until the stream is transferred firstMessage.saveOutputStream(firstOutputStream); ClientMessage secondMessage = consumer.receive(...); // Do not wait for the transfer to finish secondMessage.setOutputStream(secondOutputStream);",
"ClientMessage clientMessage = session.createMessage(); clientMessage.setInputStream(dataInputStream);",
"BytesMessage bytesMessage = session.createBytesMessage(); FileInputStream fileInputStream = new FileInputStream(fileInput); BufferedInputStream bufferedInput = new BufferedInputStream(fileInputStream); bytesMessage.setObjectProperty(\"JMS_AMQ_InputStream\", bufferedInput); someProducer.send(bytesMessage);",
"BytesMessage messageReceived = (BytesMessage) messageConsumer.receive(120000); File outputFile = new File(\"huge_message_received.dat\"); FileOutputStream fileOutputStream = new FileOutputStream(outputFile); BufferedOutputStream bufferedOutput = new BufferedOutputStream(fileOutputStream); // This will block until the entire content is saved on disk messageReceived.setObjectProperty(\"JMS_AMQ_SaveStream\", bufferedOutput);",
"// This does not wait for the stream to finish. You must keep the consumer active. messageReceived.setObjectProperty(\"JMS_AMQ_OutputStream\", bufferedOutput);",
"/subsystem=messaging-activemq/server=default/path=large-messages-directory:read-resource { \"outcome\" => \"success\", \"result\" => { \"path\" => \"activemq/largemessages\", \"relative-to\" => \"jboss.server.data.dir\" } }",
"/subsystem=messaging-activemq/server=default/path=large-messages-directory:write-attribute(name=path,value= PATH_LOCATION )",
"/subsystem=messaging-activemq/server=default/path=large-messages-directory:write-attribute(name=relative-to,value= RELATIVE_LOCATION )",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:read-attribute(name=min-large-message-size)",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=min-large-message-size,value= NEW_MIN_SIZE )",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=compress-large-messages,value=true)",
"ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(new TransportConfiguration(InVMConnectorFactory.class.getName())) locator.setMinLargeMessageSize(25 * 1024); ClientSessionFactory factory = ActiveMQClient.createClientSessionFactory();"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/work_with_large_messages |
7.3. Caching | 7.3. Caching Caching options can be configured with virt-manager during guest installation, or on an existing guest virtual machine by editing the guest XML configuration. Table 7.1. Caching options Caching Option Description Cache=none I/O from the guest is not cached on the host, but may be kept in a writeback disk cache. Use this option for guests with large I/O requirements. This option is generally the best choice, and is the only option to support migration. Cache=writethrough I/O from the guest is cached on the host but written through to the physical medium. This mode is slower and prone to scaling problems. Best used for a small number of guests with lower I/O requirements. Suggested for guests that do not support a writeback cache (such as Red Hat Enterprise Linux 5.5 and earlier), where migration is not needed. Cache=writeback I/O from the guest is cached on the host. To configure the cache mode in the guest XML, use virsh edit to edit the cache setting inside the driver tag, specifying none , writeback , or writethrough . For example, to set the cache as writeback : <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> | [
"<disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Caching |
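The disk XML fragment quoted in the caching section above is truncated. A fuller sketch of what the edited disk element might look like, and how to apply it, follows; the guest name, image path, disk format, and target device are assumptions, not values from the guide.
# Open the persistent guest definition for editing (guest name is illustrative).
virsh edit rhel6-guest
# Inside the editor, the complete disk element might look like this once cache='writeback' is set:
#   <disk type='file' device='disk'>
#     <driver name='qemu' type='qcow2' cache='writeback'/>
#     <source file='/var/lib/libvirt/images/rhel6-guest.qcow2'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>
# The new cache mode takes effect the next time the guest is fully stopped and started.
virsh shutdown rhel6-guest
virsh start rhel6-guest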
2.6. Verifying the Integrity of Back-end Databases | 2.6. Verifying the Integrity of Back-end Databases The dsctl dbverify command enables administrators to verify the integrity of back-end databases. For example, to verify the userroot database: Optionally, list the back-end databases of the instance: You need the name of the database in a later step. Stop the Directory Server instance: Verify the database: If the verification process reported any problems, fix them manually or restore a backup. Start the Directory Server instance: | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com ( userroot )",
"dsctl instance_name stop",
"dsctl instance_name dbverify userroot [04/Feb/2020:13:11:02.453624171 +0100] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [04/Feb/2020:13:11:02.465339507 +0100] - WARN - ldbm_instance_add_instance_entry_callback - ldbm instance userroot already exists [04/Feb/2020:13:11:02.468060144 +0100] - ERR - ldbm_config_read_instance_entries - Failed to add instance entry cn=userroot,cn=ldbm database,cn=plugins,cn=config [04/Feb/2020:13:11:02.471079045 +0100] - ERR - bdb_config_load_dse_info - failed to read instance entries [04/Feb/2020:13:11:02.476173304 +0100] - ERR - libdb - BDB0522 Page 0: metadata page corrupted [04/Feb/2020:13:11:02.481684604 +0100] - ERR - libdb - BDB0523 Page 0: could not check metadata page [04/Feb/2020:13:11:02.484113053 +0100] - ERR - libdb - /var/lib/dirsrv/slapd-instance_name/db/userroot/entryrdn.db: BDB0090 DB_VERIFY_BAD: Database verification failed [04/Feb/2020:13:11:02.486449603 +0100] - ERR - dbverify_ext - verify failed(-30970): /var/lib/dirsrv/slapd-instance_name/db/userroot/entryrdn.db dbverify failed",
"dsctl instance_name start"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/verifying-the-integrity-of-database-files |
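The verification procedure above can be strung together into a small wrapper so the instance is only restarted automatically when verification succeeds. This is an illustrative sketch; the instance name "example" and the back-end name "userroot" are assumptions.
#!/bin/bash
# Stop the instance, verify the back end, and restart only on a clean result.
INSTANCE=example      # assumed instance name
BACKEND=userroot      # assumed back-end name
dsctl "$INSTANCE" stop
if dsctl "$INSTANCE" dbverify "$BACKEND"; then
    echo "dbverify reported no problems; restarting $INSTANCE"
    dsctl "$INSTANCE" start
else
    echo "dbverify failed; fix the reported problems or restore a backup before restarting" >&2
    exit 1
fi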
Chapter 1. Monitoring APIs | Chapter 1. Monitoring APIs 1.1. Alertmanager [monitoring.coreos.com/v1] Description Alertmanager describes an Alertmanager cluster. Type object 1.2. AlertmanagerConfig [monitoring.coreos.com/v1beta1] Description AlertmanagerConfig configures the Prometheus Alertmanager, specifying how alerts should be grouped, inhibited and notified to external systems. Type object 1.3. AlertRelabelConfig [monitoring.openshift.io/v1] Description AlertRelabelConfig defines a set of relabel configs for alerts. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. AlertingRule [monitoring.openshift.io/v1] Description AlertingRule represents a set of user-defined Prometheus rule groups containing alerting rules. This resource is the supported method for cluster admins to create alerts based on metrics recorded by the platform monitoring stack in OpenShift, i.e. the Prometheus instance deployed to the openshift-monitoring namespace. You might use this to create custom alerting rules not shipped with OpenShift based on metrics from components such as the node_exporter, which provides machine-level metrics such as CPU usage, or kube-state-metrics, which provides metrics on Kubernetes usage. The API is mostly compatible with the upstream PrometheusRule type from the prometheus-operator. The primary difference being that recording rules are not allowed here - only alerting rules. For each AlertingRule resource created, a corresponding PrometheusRule will be created in the openshift-monitoring namespace. OpenShift requires admins to use the AlertingRule resource rather than the upstream type in order to allow better OpenShift specific defaulting and validation, while not modifying the upstream APIs directly. You can find upstream API documentation for PrometheusRule resources here: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. PodMonitor [monitoring.coreos.com/v1] Description PodMonitor defines monitoring for a set of pods. Type object 1.6. Probe [monitoring.coreos.com/v1] Description Probe defines monitoring for a set of static targets or ingresses. Type object 1.7. Prometheus [monitoring.coreos.com/v1] Description Prometheus defines a Prometheus deployment. Type object 1.8. PrometheusRule [monitoring.coreos.com/v1] Description PrometheusRule defines recording and alerting rules for a Prometheus instance Type object 1.9. ServiceMonitor [monitoring.coreos.com/v1] Description ServiceMonitor defines monitoring for a set of services. Type object 1.10. ThanosRuler [monitoring.coreos.com/v1] Description ThanosRuler defines a ThanosRuler deployment. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/monitoring-apis |
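As a rough illustration of how one of the monitoring resources listed above is used in practice, the following creates a minimal ServiceMonitor that scrapes a service labelled app=example; the namespace, labels, port name, and scrape interval are assumptions rather than values from the API reference.
# Apply a minimal ServiceMonitor (monitoring.coreos.com/v1) from an inline manifest.
cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: example
spec:
  selector:
    matchLabels:
      app: example          # scrape Services carrying this label
  endpoints:
  - port: web               # named port on the target Service
    interval: 30s           # scrape interval
EOF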
1.10.3. Security | 1.10.3. Security As stated earlier in this chapter, security cannot be an afterthought, and security under Red Hat Enterprise Linux is more than skin-deep. Authentication and access controls are deeply integrated into the operating system and are based on designs gleaned from long experience in the UNIX community. For authentication, Red Hat Enterprise Linux uses PAM -- Pluggable Authentication Modules. PAM makes it possible to fine-tune user authentication via the configuration of shared libraries that all PAM-aware applications use, all without requiring any changes to the applications themselves. Access control under Red Hat Enterprise Linux uses traditional UNIX-style permissions (read, write, execute) against user, group, and "everyone else" classifications. Like UNIX, Red Hat Enterprise Linux also makes use of setuid and setgid bits to temporarily confer expanded access rights to processes running a particular program, based on the ownership of the program file. Of course, this makes it critical that any program to be run with setuid or setgid privileges must be carefully audited to ensure that no exploitable vulnerabilities exist. Red Hat Enterprise Linux also includes support for access control lists . An access control list (ACL) is a construct that allows extremely fine-grained control over what users or groups may access a file or directory. For example, a file's permissions may restrict all access by anyone other than the file's owner, yet the file's ACL can be configured to allow only user bob to write and group finance to read the file. Another aspect of security is being able to keep track of system activity. Red Hat Enterprise Linux makes extensive use of logging, both at a kernel and an application level. Logging is controlled by the system logging daemon syslogd , which can log system information locally (normally to files in the /var/log/ directory) or to a remote system (which acts as a dedicated log server for multiple computers). Intrusion detection systems (IDS) are powerful tools for any Red Hat Enterprise Linux system administrator. An IDS makes it possible for system administrators to determine whether unauthorized changes were made to one or more systems. The overall design of the operating system itself includes IDS-like functionality. Because Red Hat Enterprise Linux is installed using the RPM Package Manager (RPM), it is possible to use RPM to verify whether any changes have been made to the packages comprising the operating system. However, because RPM is primarily a package management tool, its abilities as an IDS are somewhat limited. Even so, it can be a good first step toward monitoring a Red Hat Enterprise Linux system for unauthorized modifications. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-philosophy-rhlspec-security |
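The ACL example described in the security section above (only user bob may write, group finance may read) can be expressed with the standard ACL tools; the file name below is an assumption.
# Restrict the file to its owner, then layer the ACL entries on top.
chmod 600 report.txt
setfacl -m u:bob:rw report.txt       # user bob: read and write
setfacl -m g:finance:r report.txt    # group finance: read only
getfacl report.txt                   # display the resulting ACL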
Chapter 26. Virtualization | Chapter 26. Virtualization Nested virtualization As a Technology Preview, Red Hat Enterprise Linux 7.2 offers nested virtualization. This feature enables KVM to launch guests that can act as hypervisors and create their own guests. The virt-p2v tool Red Hat Enterprise Linux 7.2 offers the virt-p2v tool as a Technology Preview. virt-p2v (physical to virtual) is a CD-ROM, ISO or PXE image that the user can boot on a physical machine, and that creates a KVM virtual machine with disk contents identical to the physical machine. USB 3.0 support for KVM guests USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7.2. VirtIO-1 support Virtio drivers have been updated to Kernel 4.1 to provide VirtIO 1.0 Device Support. Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/technology-preview-virtualization |
6.4. Virtualization | 6.4. Virtualization qemu-kvm component, BZ# 1159613 If a virtio device is created where the number of vectors is set to a value higher than 32, the device behaves as if it was set to a zero value on Red Hat Enterprise Linux 6, but not on Enterprise Linux 7. The resulting vector setting mismatch causes a migration error if the number of vectors on any virtio device on either platform is set to 33 or higher. It is, therefore, not recommended to set the vector value to be greater than 32. virtio-win component When upgrading the NetKVM driver through the Windows Device Manager, the old registry values are not removed. As a consequence, for example, non-existent parameters may be available. qemu-kvm component When working with very large images (larger than 2TB) created with very small cluster sizes (for example, 512 bytes), block I/O errors can occur due to timeouts in qemu. To prevent this problem from occurring, use the default cluster size of 64KiB or larger. kernel component On Microsoft Windows Server 2012 containing large dynamic VHDX (Hyper-V virtual hard disk) files and using the ext3 file system, a call trace can appear, and, consequently, it is not possible to shut down the guest. To work around this problem, use the ext4 file system or set a logical block size of 1MB when creating a VHDX file. Note that this can only be done by using Microsoft PowerShell as the Hyper-V manager does not expose the -BlockSizeBytes option which has the default value of 32MB. To create a dynamic VHDX file with an approximate size of 2.5TB and 1MB block size run: libvirt component The storage drivers do not support the virsh vol-resize command options --allocate and --shrink . Use of the --shrink option will result in the following error message: Use of the --allocate option will result in the following error message: Shrinking a volume's capacity is possible as long as the value provided on the command line is greater than the volume allocation value as seen with the virsh vol-info command. You can shrink an existing volume by name through the following sequence of steps: Dump the XML of the larger volume into a file using the vol-dumpxml . Edit the file to change the name, path, and capacity values, where the capacity must be greater than or equal to the allocation. Create a temporary smaller volume using the vol-create command with the edited XML file. Back up and restore the larger volume's data using the vol-download and vol-upload commands to the smaller volume. Use the vol-delete command to remove the larger volume. Use the vol-clone command to restore the name from the larger volume. Use the vol-delete command to remove the temporary volume. In order to allocate more space on the volume, follow a similar sequence, but adjust the allocation to a larger value than the existing volume. virtio-win component It is not possible to downgrade a driver using the Search for the best driver in these locations option because the newer and installed driver will be selected as the "best" driver. If you want to force installation of a particular driver version, use the Don't search option and the Have Disk button to select the folder of the older driver. This method will allow you to install an older driver on a system that already has a driver installed. kernel component There is a known issue with the Microsoft Hyper-V host.
If a legacy network interface controller (NIC) is used on a multiple-CPU virtual machine, there is an interrupt problem in the emulated hardware when the IRQ balancing daemon is running. Call trace information is logged in the /var/log/messages file. libvirt component, BZ# 888635 Under certain circumstances, virtual machines try to boot from an incorrect device after a network boot failure. For more information, please refer to this article on Customer Portal. numad component, BZ# 872524 If numad is run on a system with a task that has very large resident memory (>= 50% total system memory), then the numad-initiated NUMA page migrations for that task can cause swapping. The swapping can then induce long latencies for the system. An example is running a 256GB Microsoft Windows KVM Virtual Machine on a 512GB host. The Windows guest will fault in all pages on boot in order to zero them. On a four node system, numad will detect that a 256GB task can fit in a subset of two or three nodes, and then attempt to migrate it to that subset. Swapping can then occur and lead to latencies. These latencies may then cause the Windows guest to hang, as timing requirements are no longer met. Therefore, on a system with only one or two very large Windows machines, it is recommended to disable numad . Note that this problem is specific to Windows 2012 guests that use more memory than exists in a single node. Windows 2012 guests appear to allocate memory more gradually than other Windows guest types, which triggers the issue. Other varieties of Windows guests do not seem to experience this problem. You can work around this problem by: limiting Windows 2012 guests to less memory than exists in a given node -- so on a typical 4 node system with even memory distribution, the guest would need to be less than the total amount of system memory divided by 4; or allowing the Windows 2012 guests to finish allocating all of their memory before allowing numad to run. numad will handle extremely huge Windows 2012 guests correctly after allowing a few minutes for the guest to finish allocating all of its memory. grubby component, BZ# 893390 When a Red Hat Enterprise Linux 6.4 guest updates the kernel and then the guest is turned off through Microsoft Hyper-V Manager, the guest fails to boot due to incomplete grub information. This is because the data is not synced properly to disk when the machine is turned off through Hyper-V Manager. To work around this problem, execute the sync command before turning the guest off. kernel component Using the mouse scroll wheel does not work on Red Hat Enterprise Linux 6.4 guests that run under certain versions of Microsoft Hyper-V Manager. However, the scroll wheel works as expected when the vncviewer utility is used. kernel component, BZ# 874406 Microsoft Windows Server 2012 guests using the e1000 driver can become unresponsive, consuming 100% CPU during boot or reboot. kernel component When a kernel panic is triggered on a Microsoft Hyper-V guest, the kdump utility does not capture the kernel error information; an error is only displayed on the command line. This is a host problem. Guest kdump works as expected on a Microsoft Hyper-V 2012 R2 host. qemu-kvm component, BZ# 871265 AMD Opteron G1, G2 or G3 CPU models on qemu-kvm use the family and model values as follows: family=15 and model=6. If these values are larger than 20, the lahf_lm CPU feature is ignored by Linux guests, even when the feature is enabled.
To work around this problem, use a different CPU model, for example AMD Opteron G4. qemu-kvm component, BZ# 860929 KVM guests must not be allowed to update the host CPU microcode. KVM does not allow this, and instead always returns the same microcode revision or patch level value to the guest. If the guest tries to update the CPU microcode, it will fail and show an error message similar to: To work around this, configure the guest to not install CPU microcode updates; for example, uninstall the microcode_ctl package on Red Hat Enterprise Linux or Fedora guests. virt-p2v component, BZ# 816930 Converting a physical server running either Red Hat Enterprise Linux 4 or Red Hat Enterprise Linux 5 which has its file system root on an MD device is not supported. Converting such a guest results in a guest which fails to boot. Note that conversion of a Red Hat Enterprise Linux 6 server which has its root on an MD device is supported. virt-p2v component, BZ# 808820 When converting a physical host with multipath storage, Virt-P2V presents all available paths for conversion. Only a single path must be selected. This must be a currently active path. virtio-win component, BZ# 615928 The balloon service on Windows 7 guests can only be started by the Administrator user. libvirt component, BZ# 622649 libvirt uses transient iptables rules for managing NAT or bridging to virtual machine guests. Any external command that reloads the iptables state (such as running system-config-firewall ) will overwrite the entries needed by libvirt . Consequently, after running any command or tool that changes the state of iptables , guests may lose access to the network. To work around this issue, use the service libvirtd reload command to restore libvirt 's additional iptables rules. virtio-win component, BZ# 612801 A Windows virtual machine must be restarted after the installation of the kernel Windows driver framework. If the virtual machine is not restarted, it may crash when a memory balloon operation is performed. qemu-kvm component, BZ# 720597 Installation of Windows 7 Ultimate x86 (32-bit) Service Pack 1 on a guest with more than 4GB of RAM and more than one CPU from a DVD medium can lead to the system being unresponsive and, consequently, to a crash during the final steps of the installation process. To work around this issue, use the Windows Update utility to install the Service Pack. qemu-kvm component, BZ# 612788 A dual function Intel 82576 Gigabit Ethernet Controller interface (codename: Kawela, PCI Vendor/Device ID: 8086:10c9) cannot have both physical functions (PFs) device-assigned to a Windows 2008 guest. Either physical function can be device assigned to a Windows 2008 guest (PCI function 0 or function 1), but not both. virt-v2v component, BZ# 618091 The virt-v2v utility is able to convert guests running on an ESX server. However, if an ESX guest has a disk with a snapshot, the snapshot must be on the same datastore as the underlying disk storage. If the snapshot and the underlying storage are on different datastores, virt-v2v will report a 404 error while trying to retrieve the storage. virt-v2v component, BZ# 678232 The VMware Tools application on Microsoft Windows is unable to disable itself when it detects that it is no longer running on a VMware platform. Consequently, converting a Microsoft Windows guest from VMware ESX, which has VMware Tools installed, will result in errors.
These errors usually manifest as error messages on start-up, and a "Stop Error" (also known as a BSOD) when shutting down the guest. To work around this issue, uninstall VMware Tools on Microsoft Windows guests prior to conversion. libguestfs component The libguestfs packages do not support remote access to disks over the network in Red Hat Enterprise Linux 6. Consequently, the virt-sysprep tool as well as other tools do not work with remote disks. Users who need to access disks remotely with tools such as virt-sysprep are advised to upgrade to Red Hat Enterprise Linux 7. | [
"New-VHD -Path .\\MyDisk.vhdx -SizeBytes 5120MB -BlockSizeBytes 1MB -Dynamic",
"error: invalid argument: storageVolumeResize: unsupported flags (0x4)",
"error: invalid argument: storageVolumeResize: unsupported flags (0x1)",
"CPU0: update failed (for patch_level=0x6000624)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/virtualization_issues |
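The volume-shrink sequence described in the libvirt entry above can be sketched as a series of virsh calls; the pool name "default", the volume name "bigvol", and the temporary file and volume names are assumptions.
# 1. Dump the larger volume's XML and edit name, path, and capacity (capacity >= allocation).
virsh vol-dumpxml bigvol --pool default > smallvol.xml
vi smallvol.xml
# 2. Create the temporary smaller volume from the edited XML.
virsh vol-create default smallvol.xml
# 3. Copy the data out of the larger volume and into the smaller one.
virsh vol-download bigvol /tmp/bigvol.data --pool default
virsh vol-upload smallvol /tmp/bigvol.data --pool default
# 4. Remove the larger volume, restore its name with vol-clone, then drop the temporary volume.
virsh vol-delete bigvol --pool default
virsh vol-clone smallvol bigvol --pool default
virsh vol-delete smallvol --pool default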
Chapter 19. Glossary | Chapter 19. Glossary This glossary defines common terms that are used in the logging documentation. Annotation You can use annotations to attach metadata to objects. Red Hat OpenShift Logging Operator The Red Hat OpenShift Logging Operator provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs. Custom resource (CR) A CR is an extension of the Kubernetes API. To configure the logging and log forwarding, you can customize the ClusterLogging and the ClusterLogForwarder custom resources. Event router The event router is a pod that watches OpenShift Container Platform events. It collects logs by using the logging. Fluentd Fluentd is a log collector that resides on each OpenShift Container Platform node. It gathers application, infrastructure, and audit logs and forwards them to different outputs. Garbage collection Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Elasticsearch Elasticsearch is a distributed search and analytics engine. OpenShift Container Platform uses Elasticsearch as a default log store for the logging. OpenShift Elasticsearch Operator The OpenShift Elasticsearch Operator is used to run an Elasticsearch cluster on OpenShift Container Platform. The OpenShift Elasticsearch Operator provides self-service for the Elasticsearch cluster operations and is used by the logging. Indexing Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes the performance by minimizing the amount of disk access required when a query is processed. JSON logging The Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either the logging managed Elasticsearch or any other third-party system supported by the Log Forwarding API. Kibana Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Labels Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod. Logging With the logging, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them to a default log store, forward them to third party systems, and query and visualize the stored logs in the default log store. Logging collector A logging collector collects logs from the cluster, formats them, and forwards them to the log store or third party systems. Log store A log store is used to store aggregated logs. You can use an internal log store or forward logs to external log stores. Log visualizer Log visualizer is the user interface (UI) component you can use to view information such as logs, graphs, charts, and other metrics. Node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operators Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Pod A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node. 
Role-based access control (RBAC) RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles. Shards Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards. Taint Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. Toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. Web console A user interface (UI) to manage OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/logging/openshift-logging-common-terms |
Appendix B. Sharing reports with non-administrators | Appendix B. Sharing reports with non-administrators Users without administrator privileges can view collected logs and metrics as read-only users. The following example creates a user named user name with view (read-only) permissions. Procedure Log in to the Metrics Store virtual machine. Create a new user: Log in to the openshift-logging project: Assign a view role to the user: Create a password for the user: | [
"oc create user username # oc create identity allow_all: username # oc create useridentitymapping allow_all: username username",
"oc project openshift-logging",
"oc adm policy add-role-to-user view user name",
"oc login --username= user name --password= password"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/adding_read_only_kibana_user |
Chapter 21. Supportability and Maintenance | Chapter 21. Supportability and Maintenance ABRT 2.1 Red Hat Enterprise Linux 7 includes the Automatic Bug Reporting Tool ( ABRT ) 2.1, which features an improved user interface and the ability to send μReports , lightweight anonymous problem reports suitable for machine processing, such as gathering crash statistics. The set of supported languages, for which ABRT is capable of detecting problems, has been extended with the addition of Java and Ruby in ABRT 2.1. In order to use ABRT , ensure that the abrt-desktop or the abrt-cli package is installed on your system. The abrt-desktop package provides a graphical user interface for ABRT , and the abrt-cli package contains a tool for using ABRT on the command line. You can also install both. To install the package containing the graphical user interface for ABRT , run the following command as the root user: To install the package that provides the command line ABRT tool, use the following command: Note that while both of the above commands cause the main ABRT system to be installed, you may need to install additional packages to obtain support for detecting crashes in software programmed using various languages. See the Automatic Bug Reporting Tool (ABRT) chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for information on additional packages available with the ABRT system. Upon installation, the abrtd daemon, which is the core of the ABRT crash-detecting service, is configured to start at boot time. You can use the following command to verify its current status: In order to discover as many software bugs as possible, administrators should configure ABRT to automatically send reports of application crashes to Red Hat. To enable the autoreporting feature, issue the following command as root : Additional Information on ABRT Red Hat Enterprise Linux 7 System Administrator's Guide - The Automatic Bug Reporting Tool (ABRT) chapter of the Administrator's Guide for Red Hat Enterprise Linux 7 contains detailed information on installing, configuring, and using the ABRT service. | [
"~]# yum install abrt-desktop",
"~]# yum install abrt-cli",
"~]USD systemctl is-active abrtd.service active",
"~]# abrt-auto-reporting enabled"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-supportability_and_maintenance |
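Once abrtd is running, detected problems can also be inspected and reported from the command line with abrt-cli; the commands below are a generic sketch and the problem-directory argument is a placeholder rather than a real path.
abrt-cli list                       # list detected problems and their directories
abrt-cli info <problem_directory>   # show details for one problem
abrt-cli report <problem_directory> # report it (sent to Red Hat when autoreporting is enabled)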
Appendix B. Red Hat OpenStack Platform for POWER | Appendix B. Red Hat OpenStack Platform for POWER In a new Red Hat OpenStack Platform installation, you can deploy overcloud Compute nodes on POWER (ppc64le) hardware. For the Compute node cluster, you can use the same architecture, or use a combination of x86_64 and ppc64le systems. The undercloud, Controller nodes, Ceph Storage nodes, and all other systems are supported only on x86_64 hardware. B.1. Ceph Storage When you configure access to external Ceph in a multi-architecture cloud, set the CephAnsiblePlaybook parameter to /usr/share/ceph-ansible/site.yml.sample and include your client key and other Ceph-specific parameters. For example: B.2. Composable services The following services typically form part of the Controller node and are available for use in custom roles as Technology Preview: Block Storage service (cinder) Image service (glance) Identity service (keystone) Networking service (neutron) Object Storage service (swift) Note Red Hat does not support features in Technology Preview. For more information about composable services, see composable services and custom roles in the Advanced Overcloud Customization guide. Use the following example to understand how to move the listed services from the Controller node to a dedicated ppc64le node: | [
"parameter_defaults: CephAnsiblePlaybook: /usr/share/ceph-ansible/site.yml.sample CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ== CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19 CephExternalMonHost: 172.16.1.7, 172.16.1.8",
"(undercloud) [stack@director ~]USD rsync -a /usr/share/openstack-tripleo-heat-templates/. ~/templates (undercloud) [stack@director ~]USD cd ~/templates/roles (undercloud) [stack@director roles]USD cat <<EO_TEMPLATE >ControllerPPC64LE.yaml ############################################################################### Role: ControllerPPC64LE # ############################################################################### - name: ControllerPPC64LE description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 tags: - primary - controller networks: - External - InternalApi - Storage - StorageMgmt - Tenant # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controllerppc64le-%index%' ImageDefault: ppc64le-overcloud-full ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackendDellPs - OS::TripleO::Services::CinderBackendDellSc - OS::TripleO::Services::CinderBackendDellEMCUnity - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI - OS::TripleO::Services::CinderBackendDellEMCVNX - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI - OS::TripleO::Services::CinderBackendNetApp - OS::TripleO::Services::CinderBackendScaleIO - OS::TripleO::Services::CinderBackendVRTSHyperScale - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderHPELeftHandISCSI - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Collectd - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GlanceRegistry - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronApi - OS::TripleO::Services::NeutronBgpVpnApi - OS::TripleO::Services::NeutronSfcApi - OS::TripleO::Services::NeutronCorePlugin - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronL2gwAgent - OS::TripleO::Services::NeutronL2gwApi - OS::TripleO::Services::NeutronL3Agent - OS::TripleO::Services::NeutronLbaasv2Agent - OS::TripleO::Services::NeutronLbaasv2Api - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronML2FujitsuCfab - OS::TripleO::Services::NeutronML2FujitsuFossw - OS::TripleO::Services::NeutronOvsAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::Ntp - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OpenDaylightOvs - OS::TripleO::Services::Rhsm - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::SensuClient - OS::TripleO::Services::SkydiveAgent - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::SwiftProxy - OS::TripleO::Services::SwiftDispersion - OS::TripleO::Services::SwiftRingBuilder - OS::TripleO::Services::SwiftStorage - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - 
OS::TripleO::Services::Vpp - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent - OS::TripleO::Services::Ptp EO_TEMPLATE (undercloud) [stack@director roles]USD sed -i~ -e '/OS::TripleO::Services::\\(Cinder\\|Glance\\|Swift\\|Keystone\\|Neutron\\)/d' Controller.yaml (undercloud) [stack@director roles]USD cd ../ (undercloud) [stack@director templates]USD openstack overcloud roles generate --roles-path roles -o roles_data.yaml Controller Compute ComputePPC64LE ControllerPPC64LE BlockStorage ObjectStorage CephStorage"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/appe-OSP_on_POWER |
4.2. Creating an Image Builder blueprint in the web console interface | 4.2. Creating an Image Builder blueprint in the web console interface To describe the customized system image, create a blueprint first. Prerequisites You have opened the Image Builder interface of the RHEL 7 web console in a browser. Procedure 1. Click Create Blueprint in the top right corner. Figure 4.3. Creating a Blueprint A pop-up appears with fields for the blueprint name and description . 2. Fill in the name of the blueprint and its description, then click Create. The screen changes to blueprint editing mode . 3. Add components that you want to include in the system image: i. On the left, enter all or part of the component name in the Available Components field and press Enter. Figure 4.4. Searching for Available Components The search is added to the list of filters under the text entry field, and the list of components below is reduced to those that match the search. If the list of components is too long, add further search terms in the same way. ii. The list of components is paged. To move to other result pages, use the arrows and entry field above the component list. iii. Click the name of the component you intend to use to display its details. The right pane fills with details of the component, such as its version and dependencies. iv. Select the version you want to use in the Component Options box, with the Version Release dropdown. v. Click Add in the top left. vi. If you added a component by mistake, remove it by clicking the ... button at the far right of its entry in the right pane, and select Remove in the menu. Note If you do not intend to select a version for some components, you can skip the component details screen and version selection by clicking the + buttons on the right side of the component list. 4. To save the blueprint, click Commit in the top right. A dialog with a summary of the changes pops up. Click Commit . A small pop-up on the right informs you of the saving progress and then the result. 5. To exit the editing screen, click Back to Blueprints in the top left. The Image Builder view opens, listing existing blueprints. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_2 |
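For comparison with the web console steps above, a blueprint can also be expressed as a TOML file and pushed from the command line, assuming the lorax-composer/composer-cli tooling is installed; the blueprint name and package choices below are purely illustrative.
# Write a minimal blueprint and push it to Image Builder.
cat > example-server.toml <<'EOF'
name = "example-server"
description = "Illustrative blueprint with a web server package"
version = "0.0.1"

[[packages]]
name = "httpd"
version = "*"
EOF
composer-cli blueprints push example-server.toml
composer-cli blueprints list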
Chapter 1. Red Hat OpenStack Platform high availability overview and planning | Chapter 1. Red Hat OpenStack Platform high availability overview and planning Red Hat OpenStack Platform (RHOSP) high availability (HA) is a collection of services that orchestrate failover and recovery for your deployment. When you plan your HA deployment, ensure that you review the considerations for different aspects of the environment, such as hardware assignments and network configuration. 1.1. Red Hat OpenStack Platform high availability services Red Hat OpenStack Platform (RHOSP) employs several technologies to provide the services required to implement high availability (HA). These services include Galera, RabbitMQ, Redis, HAProxy, individual services that Pacemaker manages, and Systemd and plain container services that Podman manages. 1.1.1. Service types Core container Core container services are Galera, RabbitMQ, Redis, and HAProxy. These services run on all Controller nodes and require specific management and constraints for the start, stop and restart actions. You use Pacemaker to launch, manage, and troubleshoot core container services. Note RHOSP uses the MariaDB Galera Cluster to manage database replication. Active-passive Active-passive services run on one Controller node at a time, and include services such as openstack-cinder-volume . To move an active-passive service, you must use Pacemaker to ensure that the correct stop-start sequence is followed. Systemd and plain container Systemd and plain container services are independent services that can withstand a service interruption. Therefore, if you restart a high availability service such as Galera, you do not need to manually restart any other service, such as nova-api . You can use systemd or Podman to directly manage systemd and plain container services. When orchestrating your HA deployment, director uses templates and Puppet modules to ensure that all services are configured and launched correctly. In addition, when troubleshooting HA issues, you must interact with services in the HA framework using the podman command or the systemctl command. 1.1.2. Service modes HA services can run in one of the following modes: Active-active Pacemaker runs the same service on multiple Controller nodes, and uses HAProxy to distribute traffic across the nodes or to a specific Controller with a single IP address. In some cases, HAProxy distributes traffic to active-active services with Round Robin scheduling. You can add more Controller nodes to improve performance. Important Active-active mode is supported only in distributed compute node (DCN) architecture at Edge sites. Active-passive Services that are unable to run in active-active mode must run in active-passive mode. In this mode, only one instance of the service is active at a time. For example, HAProxy uses stick-table options to direct incoming Galera database connection requests to a single back-end service. This helps prevent too many simultaneous connections to the same data from multiple Galera nodes. 1.2. Planning high availability hardware assignments When you plan hardware assignments, consider the number of nodes that you want to run in your deployment, as well as the number of Virtual Machine (vm) instances that you plan to run on Compute nodes. Controller nodes Most non-storage services run on Controller nodes. All services are replicated across the three nodes and are configured as active-active or active-passive services. 
A high availability (HA) environment requires a minimum of three nodes. Red Hat Ceph Storage nodes Storage services run on these nodes and provide pools of Red Hat Ceph Storage areas to the Compute nodes. A minimum of three nodes are required. Compute nodes Virtual machine (VM) instances run on Compute nodes. You can deploy as many Compute nodes as you need to meet your capacity requirements, as well as migration and reboot operations. You must connect Compute nodes to the storage network and to the project network to ensure that VMs can access storage nodes, VMs on other Compute nodes, and public networks. STONITH You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. Deploying a highly available overcloud without STONITH is not supported. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . 1.3. Planning high availability networking When you plan the virtual and physical networks, consider the provisioning network switch configuration and the external network switch configuration. In addition to the network configuration, you must deploy the following components: Provisioning network switch This switch must be able to connect the undercloud to all the physical computers in the overcloud. The NIC on each overcloud node that is connected to this switch must be able to PXE boot from the undercloud. The portfast parameter must be enabled. Controller/External network switch This switch must be configured to perform VLAN tagging for the other VLANs in the deployment. Allow only VLAN 100 traffic to external networks. Networking hardware and keystone endpoint To prevent a Controller node network card or network switch failure disrupting overcloud services availability, ensure that the keystone admin endpoint is located on a network that uses bonded network cards or networking hardware redundancy. If you move the keystone endpoint to a different network, such as internal_api , ensure that the undercloud can reach the VLAN or subnet. For more information, see the Red Hat Knowledgebase solution How to migrate Keystone Admin Endpoint to internal_api network . 1.4. Accessing the high availability environment Log in to high availability (HA) nodes to view their status and details. Prerequisites High availability is deployed and running. You must have access to the stack user on Red Hat OpenStack Platform (RHOSP) director. Procedure In a running HA environment, log in to the undercloud as the stack user. Identify the IP addresses of your overcloud nodes: Log in to one of the overcloud nodes: Replace <node_ip> with the IP address of the node that you want to log in to. 1.5. Additional resources Chapter 2, Example deployment: High availability cluster with Compute and Ceph | [
"source ~/stackrc (undercloud) USD metalsmith list +-------+------------------------+---+----------------------+---+ | ID | Name |...| Networks |...| +-------+------------------------+---+----------------------+---+ | d1... | overcloud-controller-0 |...| ctlplane=*10.200.0.11* |...|",
"(undercloud) USD ssh tripleo-admin@<node_IP>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/assembly_ha-overview-planning_rhosp |
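After logging in to a Controller node, a quick way to see the split between the Pacemaker-managed core container services and the independently managed services described above is to query both managers. The following is only a sketch of such a health check; resource, container, and unit names vary between releases and deployments.

# Pacemaker-managed core services (Galera, RabbitMQ, HAProxy, and any active-passive resources)
sudo pcs status

# Plain container services managed directly by Podman
sudo podman ps --format "{{.Names}} {{.Status}}"

# Systemd units that wrap individual service containers (the unit prefix may differ by release)
sudo systemctl list-units "tripleo_*" --no-pager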
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.412_release_notes/providing-direct-documentation-feedback_openjdk |
Chapter 10. Creating an XFS file system | Chapter 10. Creating an XFS file system As a system administrator, you can create an XFS file system on a block device to enable it to store files and directories. 10.1. Creating an XFS file system with mkfs.xfs This procedure describes how to create an XFS file system on a block device. Procedure To create the file system: If the device is a regular partition, an LVM volume, an MD volume, a disk, or a similar device, use the following command: Replace block-device with the path to the block device. For example, /dev/sdb1 , /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a , or /dev/my-volgroup/my-lv . In general, the default options are optimal for common use. When using mkfs.xfs on a block device containing an existing file system, add the -f option to overwrite that file system. To create the file system on a hardware RAID device, check if the system correctly detects the stripe geometry of the device: If the stripe geometry information is correct, no additional options are needed. Create the file system: If the information is incorrect, specify stripe geometry manually with the su and sw parameters of the -d option. The su parameter specifies the RAID chunk size, and the sw parameter specifies the number of data disks in the RAID device. For example: Use the following command to wait for the system to register the new device node: Additional resources mkfs.xfs(8) man page on your system | [
"mkfs.xfs block-device",
"mkfs.xfs block-device",
"mkfs.xfs -d su= 64k ,sw= 4 /dev/sda3",
"udevadm settle"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/assembly_creating-an-xfs-file-system_managing-file-systems |
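As a sketch of how you might confirm the stripe geometry that the procedure refers to, the following commands read the I/O hints exposed by the kernel and, for a Linux software RAID (MD) device, the chunk size and member count. The device names and values are placeholders; for hardware RAID, the authoritative chunk size and data-disk count come from the controller's management utility.

# I/O hints detected by the kernel; mkfs.xfs derives su and sw from these
lsblk -o NAME,MIN-IO,OPT-IO,PHY-SEC /dev/md0

# For an MD software RAID, show the RAID level, device count, and chunk size
mdadm --detail /dev/md0 | grep -E 'Raid Level|Raid Devices|Chunk Size'

# Example: a RAID 6 array of 10 disks with a 512 KiB chunk has 8 data disks
mkfs.xfs -d su=512k,sw=8 /dev/md0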
Chapter 4. Managing IDE extensions | Chapter 4. Managing IDE extensions IDEs use extensions or plugins to extend their functionality, and the mechanism for managing extensions differs between IDEs. Section 4.1, "Extensions for Microsoft Visual Studio Code - Open Source" 4.1. Extensions for Microsoft Visual Studio Code - Open Source To manage extensions, this IDE uses one of these Open VSX registry instances: The embedded instance of the Open VSX registry that runs in the plugin-registry pod of OpenShift Dev Spaces to support air-gapped, offline, and proxy-restricted environments. The embedded Open VSX registry contains only a subset of the extensions published on open-vsx.org . This subset is customizable . The public open-vsx.org registry that is accessed over the internet. A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods. The default is the embedded instance of the Open VSX registry. 4.1.1. Selecting an Open VSX registry instance The default is the embedded instance of the Open VSX registry. If the default Open VSX registry instance is not what you need, you can select one of the following instances: The Open VSX registry instance at https://open-vsx.org that requires access to the internet. A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods. Procedure Edit the openVSXURL value in the CheCluster custom resource: spec: components: pluginRegistry: openVSXURL: " <url_of_an_open_vsx_registry_instance> " 1 1 For example: openVSXURL: "https://open-vsx.org" . Tip To select the embedded Open VSX registry instance in the plugin-registry pod, use openVSXURL: '' . You can customize the list of included extensions . You can also point openVSXURL at the URL of a standalone Open VSX registry instance if its URL is accessible from within your organization's cluster and not blocked by a proxy. 4.1.2. Adding or removing extensions in the embedded Open VSX registry instance You can add or remove extensions in the embedded Open VSX registry instance. This results in a custom build of the Open VSX registry that can be used in your organization's workspaces. Tip To get the latest security fixes after a OpenShift Dev Spaces update, rebuild your container based on the latest tag or SHA. Procedure Get the publisher and extension name of each chosen extension: Find the extension on the Open VSX registry website and copy the URL of the extension's listing page and extension's version. Extract the <publisher> and <extension> name from the copied URL: Tip If the extension is only available from Microsoft Visual Studio Marketplace , but not Open VSX , you can ask the extension publisher to also publish it on open-vsx.org according to these instructions , potentially using this GitHub action . If the extension publisher is unavailable or unwilling to publish the extension to open-vsx.org , and if there is no Open VSX equivalent of the extension, consider reporting an issue to the Open VSX team. Build the custom plugin registry image and update CheCluster custom resource: Tip During the build process, each extension will be verified for compatibility with the version of Visual Studio Code used in OpenShift Dev Spaces. Using OpenShift Dev Spaces instance: Login to your OpenShift Dev Spaces instance as an administrator. Create a new Red Hat Registry Service Account and copy username and token. Start a workspace using the plugin registry repository . 
Open a terminal and check out the Git tag that corresponds to your OpenShift Dev Spaces version (e.g., devspaces-3.15-rhel-8 ): Open the openvsx-sync.json file and add or remove extensions. Execute the 1. Login to registry.redhat.io task in the workspace (Terminal > Run Task... > devfile: 1. Login to registry.redhat.io) and log in to registry.redhat.io . Execute the 2. Build and Publish a Custom Plugin Registry task in the workspace (Terminal > Run Task... > devfile: 2. Build and Publish a Custom Plugin Registry). Execute the 3. Configure Che to use the Custom Plugin Registry task in the workspace (Terminal > Run Task... > devfile: 3. Configure Che to use the Custom Plugin Registry). Using Linux operating system: Tip Podman and NodeJS version 18.20.3 or higher should be installed in the system. Download or fork and clone the Dev Spaces repository . Go to the plugin registry submodule: Check out the tag that corresponds to your OpenShift Dev Spaces version (e.g., devspaces-3.15-rhel-8 ): Create a new Red Hat Registry Service Account and copy the username and token. Log in to registry.redhat.io : For each extension that you need to add or remove, edit the openvsx-sync.json file : To add extensions, add the publisher, name, and extension version to the openvsx-sync.json file. To remove extensions, remove the publisher, name, and extension version from the openvsx-sync.json file. Use the following JSON syntax: { "id": " <publisher> . <name> ", "version": " <extension_version> " } Tip If you have a closed-source extension or an extension developed only for internal use in your organization, you can add the extension directly from a .vsix file by using a URL accessible to your custom plugin registry container: { "id": " <publisher> . <name> ", "download": " <url_to_download_vsix_file> ", "version": " <extension_version> " } Read the | [
"spec: components: pluginRegistry: openVSXURL: \" <url_of_an_open_vsx_registry_instance> \" 1",
"https://open-vsx.org/extension/ <publisher> / <name>",
"git checkout devspaces-USDPRODUCT_VERSION-rhel-8",
"git clone https://github.com/redhat-developer/devspaces.git",
"cd devspaces/dependencies/che-plugin-registry/",
"git checkout devspaces-USDPRODUCT_VERSION-rhel-8",
"login registry.redhat.io",
"{ \"id\": \" <publisher> . <name> \", \"version\": \" <extension_version> \" }",
"{ \"id\": \" <publisher> . <name> \", \"download\": \" <url_to_download_vsix_file> \", \"version\": \" <extension_version> \" }",
"./build.sh -o <username> -r quay.io -t custom",
"podman push quay.io/ <username/plugin_registry:custom>",
"spec: components: pluginRegistry: deployment: containers: - image: quay.io/ <username/plugin_registry:custom> openVSXURL: ''",
"\"trustedExtensionAuthAccess\": [ \"<publisher1>.<extension1>\", \"<publisher2>.<extension2>\" ]",
"env: - name: VSCODE_TRUSTED_EXTENSIONS value: \"<publisher1>.<extension1>,<publisher2>.<extension2>\"",
"kind: ConfigMap apiVersion: v1 metadata: name: trusted-extensions labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/administration_guide/managing-ide-extensions |
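If you manage the CheCluster custom resource from the command line instead of editing it interactively, a merge patch such as the following switches between the public and the embedded registry. This is a sketch only: the resource name devspaces and the namespace openshift-devspaces are common defaults, not values taken from this guide, so adjust them to match your installation.

# Point workspace IDEs at the public Open VSX registry (requires internet access)
oc patch checluster devspaces -n openshift-devspaces --type merge \
  -p '{"spec": {"components": {"pluginRegistry": {"openVSXURL": "https://open-vsx.org"}}}}'

# Revert to the embedded registry that runs in the plugin-registry pod
oc patch checluster devspaces -n openshift-devspaces --type merge \
  -p '{"spec": {"components": {"pluginRegistry": {"openVSXURL": ""}}}}'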
Chapter 2. CloudPrivateIPConfig [cloud.network.openshift.io/v1] | Chapter 2. CloudPrivateIPConfig [cloud.network.openshift.io/v1] Description CloudPrivateIPConfig performs an assignment of a private IP address to the primary NIC associated with cloud VMs. This is done by specifying the IP and Kubernetes node which the IP should be assigned to. This CRD is intended to be used by the network plugin which manages the cluster network. The spec side represents the desired state requested by the network plugin, and the status side represents the current state that this CRD's controller has executed. No users will have permission to modify it, and if a cluster-admin decides to edit it for some reason, their changes will be overwritten the time the network plugin reconciles the object. Note: the CR's name must specify the requested private IP address (can be IPv4 or IPv6). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the definition of the desired private IP request. status object status is the observed status of the desired private IP request. Read-only. 2.1.1. .spec Description spec is the definition of the desired private IP request. Type object Property Type Description node string node is the node name, as specified by the Kubernetes field: node.metadata.name 2.1.2. .status Description status is the observed status of the desired private IP request. Read-only. Type object Required conditions Property Type Description conditions array condition is the assignment condition of the private IP and its status conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } node string node is the node name, as specified by the Kubernetes field: node.metadata.name 2.1.3. .status.conditions Description condition is the assignment condition of the private IP and its status Type array 2.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. 
For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.2. API endpoints The following API endpoints are available: /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs DELETE : delete collection of CloudPrivateIPConfig GET : list objects of kind CloudPrivateIPConfig POST : create a CloudPrivateIPConfig /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name} DELETE : delete a CloudPrivateIPConfig GET : read the specified CloudPrivateIPConfig PATCH : partially update the specified CloudPrivateIPConfig PUT : replace the specified CloudPrivateIPConfig /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name}/status GET : read status of the specified CloudPrivateIPConfig PATCH : partially update status of the specified CloudPrivateIPConfig PUT : replace status of the specified CloudPrivateIPConfig 2.2.1. /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CloudPrivateIPConfig Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CloudPrivateIPConfig Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a CloudPrivateIPConfig Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body CloudPrivateIPConfig schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 201 - Created CloudPrivateIPConfig schema 202 - Accepted CloudPrivateIPConfig schema 401 - Unauthorized Empty 2.2.2. /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the CloudPrivateIPConfig Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CloudPrivateIPConfig Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CloudPrivateIPConfig Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CloudPrivateIPConfig Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CloudPrivateIPConfig Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body CloudPrivateIPConfig schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 201 - Created CloudPrivateIPConfig schema 401 - Unauthorized Empty 2.2.3. /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the CloudPrivateIPConfig Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CloudPrivateIPConfig Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CloudPrivateIPConfig Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CloudPrivateIPConfig Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body CloudPrivateIPConfig schema Table 2.31. HTTP responses HTTP code Response body 200 - OK CloudPrivateIPConfig schema 201 - Created CloudPrivateIPConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/cloudprivateipconfig-cloud-network-openshift-io-v1
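Pulling the schema together, an object served by this API looks roughly like the following. The manifest is an illustrative sketch: the IP address, node name, and status condition contents are placeholders, and in practice the network plugin creates and updates these objects, so administrators normally only read them.

apiVersion: cloud.network.openshift.io/v1
kind: CloudPrivateIPConfig
metadata:
  name: 192.168.10.25           # the object name is the requested private IP (IPv4 or IPv6)
spec:
  node: worker-0.example.com    # node.metadata.name of the node that should receive the IP
status:
  node: worker-0.example.com
  conditions:                   # example condition written by the controller; values are illustrative
  - type: Assigned
    status: "True"
    reason: SuccessfulAssign
    message: IP address successfully assigned
    lastTransitionTime: "2024-01-01T00:00:00Z"

To inspect the objects that the network plugin has created, a read-only query is usually all that is needed, for example oc get cloudprivateipconfigs .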
Health metrics | Health metrics Red Hat Advanced Cluster Management for Kubernetes 2.11 Health metrics | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/health_metrics/index |
Chapter 1. Understanding OpenShift updates | Chapter 1. Understanding OpenShift updates 1.1. Introduction to OpenShift updates With OpenShift Container Platform 4, you can update an OpenShift Container Platform cluster with a single operation by using the web console or the OpenShift CLI ( oc ). Platform administrators can view new update options either by going to Administration Cluster Settings in the web console or by looking at the output of the oc adm upgrade command. Red Hat hosts a public OpenShift Update Service (OSUS), which serves a graph of update possibilities based on the OpenShift Container Platform release images in the official registry. The graph contains update information for any public OCP release. OpenShift Container Platform clusters are configured to connect to the OSUS by default, and the OSUS responds to clusters with information about known update targets. An update begins when either a cluster administrator or an automatic update controller edits the custom resource (CR) of the Cluster Version Operator (CVO) with a new version. To reconcile the cluster with the newly specified version, the CVO retrieves the target release image from an image registry and begins to apply changes to the cluster. Note Operators previously installed through Operator Lifecycle Manager (OLM) follow a different process for updates. See Updating installed Operators for more information. The target release image contains manifest files for all cluster components that form a specific OCP version. When updating the cluster to a new version, the CVO applies manifests in separate stages called Runlevels. Most, but not all, manifests support one of the cluster Operators. As the CVO applies a manifest to a cluster Operator, the Operator might perform update tasks to reconcile itself with its new specified version. The CVO monitors the state of each applied resource and the states reported by all cluster Operators. The CVO only proceeds with the update when all manifests and cluster Operators in the active Runlevel reach a stable condition. After the CVO updates the entire control plane through this process, the Machine Config Operator (MCO) updates the operating system and configuration of every node in the cluster. 1.1.1. Common questions about update availability There are several factors that affect if and when an update is made available to an OpenShift Container Platform cluster. The following list provides common questions regarding the availability of an update: What are the differences between each of the update channels? A new release is initially added to the candidate channel. After successful final testing, a release on the candidate channel is promoted to the fast channel, an errata is published, and the release is now fully supported. After a delay, a release on the fast channel is finally promoted to the stable channel. This delay represents the only difference between the fast and stable channels. Note For the latest z-stream releases, this delay may generally be a week or two. However, the delay for initial updates to the latest minor version may take much longer, generally 45-90 days. Releases promoted to the stable channel are simultaneously promoted to the eus channel. The primary purpose of the eus channel is to serve as a convenience for clusters performing a Control Plane Only update. Is a release on the stable channel safer or more supported than a release on the fast channel? 
If a regression is identified for a release on a fast channel, it will be resolved and managed to the same extent as if that regression was identified for a release on the stable channel. The only difference between releases on the fast and stable channels is that a release only appears on the stable channel after it has been on the fast channel for some time, which provides more time for new update risks to be discovered. A release that is available on the fast channel always becomes available on the stable channel after this delay. What does it mean if an update has known issues? Red Hat continuously evaluates data from multiple sources to determine whether updates from one version to another have any declared issues. Identified issues are typically documented in the version's release notes. Even if the update path has known issues, customers are still supported if they perform the update. Red Hat does not block users from updating to a certain version. Red Hat may declare conditional update risks, which may or may not apply to a particular cluster. Declared risks provide cluster administrators more context about a supported update. Cluster administrators can still accept the risk and update to that particular target version. What if I see that an update to a particular release is no longer recommended? If Red Hat removes update recommendations from any supported release due to a regression, a superseding update recommendation will be provided to a future version that corrects the regression. There may be a delay while the defect is corrected, tested, and promoted to your selected channel. How long until the z-stream release is made available on the fast and stable channels? While the specific cadence can vary based on a number of factors, new z-stream releases for the latest minor version are typically made available about every week. Older minor versions, which have become more stable over time, may take much longer for new z-stream releases to be made available. Important These are only estimates based on past data about z-stream releases. Red Hat reserves the right to change the release frequency as needed. Any number of issues could cause irregularities and delays in this release cadence. Once a z-stream release is published, it also appears in the fast channel for that minor version. After a delay, the z-stream release may then appear in that minor version's stable channel. Additional resources Understanding update channels and releases 1.1.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. 
Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue related to the update path, such as incompatibility or availability. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 1.1.3. Understanding cluster Operator condition types The status of cluster Operators includes their condition type, which informs you of the current state of your Operator's health. The following definitions cover a list of some common ClusterOperator condition types. Operators that have additional condition types and use Operator-specific language have been omitted. The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster Operators so that cluster administrators can better understand the state of the OpenShift Container Platform cluster. Available: The condition type Available indicates that an Operator is functional and available in the cluster. 
If the status is False , at least one part of the operand is non-functional and the condition requires an administrator to intervene. Progressing: The condition type Progressing indicates that an Operator is actively rolling out new code, propagating configuration changes, or otherwise moving from one steady state to another. Operators do not report the condition type Progressing as True when they are reconciling a known state. If the observed cluster state has changed and the Operator is reacting to it, then the status reports back as True , since it is moving from one steady state to another. Degraded: The condition type Degraded indicates that an Operator has a current state that does not match its required state over a period of time. The period of time can vary by component, but a Degraded status represents persistent observation of an Operator's condition. As a result, an Operator does not fluctuate in and out of the Degraded state. There might be a different condition type if the transition from one state to another does not persist over a long enough period to report Degraded . An Operator does not report Degraded during the course of a normal update. An Operator may report Degraded in response to a persistent infrastructure failure that requires eventual administrator intervention. Note This condition type is only an indication that something may need investigation and adjustment. As long as the Operator is available, the Degraded condition does not cause user workload failure or application downtime. Upgradeable: The condition type Upgradeable indicates whether the Operator is safe to update based on the current cluster state. The message field contains a human-readable description of what the administrator needs to do for the cluster to successfully update. The CVO allows updates when this condition is True , Unknown or missing. When the Upgradeable status is False , only minor updates are impacted, and the CVO prevents the cluster from performing impacted updates unless forced. 1.1.4. Understanding cluster version condition types The Cluster Version Operator (CVO) monitors cluster Operators and other components, and is responsible for collecting the status of both the cluster version and its Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. In addition to Available , Progressing , and Upgradeable , there are condition types that affect cluster versions and Operators. Failing: The cluster version condition type Failing indicates that a cluster cannot reach its desired state, is unhealthy, and requires an administrator to intervene. Invalid: The cluster version condition type Invalid indicates that the cluster version has an error that prevents the server from taking action. The CVO only reconciles the current state as long as this condition is set. RetrievedUpdates: The cluster version condition type RetrievedUpdates indicates whether or not available updates have been retrieved from the upstream update server. The condition is Unknown before retrieval, False if the updates either recently failed or could not be retrieved, or True if the availableUpdates field is both recent and accurate. ReleaseAccepted: The cluster version condition type ReleaseAccepted with a True status indicates that the requested release payload was successfully loaded without failure during image verification and precondition checking. 
ImplicitlyEnabledCapabilities: The cluster version condition type ImplicitlyEnabledCapabilities with a True status indicates that there are enabled capabilities that the user is not currently requesting through spec.capabilities . The CVO does not support disabling capabilities if any associated resources were previously managed by the CVO. 1.1.5. Common terms Control plane The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. Cluster Version Operator The Cluster Version Operator (CVO) starts the update process for the cluster. It checks with OSUS based on the current cluster version and retrieves the graph which contains available or possible update paths. Machine Config Operator The Machine Config Operator (MCO) is a cluster-level Operator that manages the operating system and machine configurations. Through the MCO, platform administrators can configure and update systemd, CRI-O and Kubelet, the kernel, NetworkManager, and other system features on the worker nodes. OpenShift Update Service The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including to Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. Channels Channels declare an update strategy tied to minor versions of OpenShift Container Platform. The OSUS uses this configured strategy to recommend update edges consistent with that strategy. Recommended update edge A recommended update edge is a recommended update between OpenShift Container Platform releases. Whether a given update is recommended can depend on the cluster's configured channel, current version, known bugs, and other information. OSUS communicates the recommended edges to the CVO, which runs in every cluster. Additional resources Machine Config Overview Using the OpenShift Update Service in a disconnected environment Update channels 1.1.6. Additional resources How cluster updates work . 1.2. How cluster updates work The following sections describe each major aspect of the OpenShift Container Platform (OCP) update process in detail. For a general overview of how updates work, see the Introduction to OpenShift updates . 1.2.1. The Cluster Version Operator The Cluster Version Operator (CVO) is the primary component that orchestrates and facilitates the OpenShift Container Platform update process. During installation and standard cluster operation, the CVO is constantly comparing the manifests of managed cluster Operators to in-cluster resources, and reconciling discrepancies to ensure that the actual state of these resources match their desired state. 1.2.1.1. The ClusterVersion object One of the resources that the Cluster Version Operator (CVO) monitors is the ClusterVersion resource. Administrators and OpenShift components can communicate or interact with the CVO through the ClusterVersion object. The desired CVO state is declared through the ClusterVersion object and the current CVO state is reflected in the object's status. Note Do not directly modify the ClusterVersion object. Instead, use interfaces such as the oc CLI or the web console to declare your update target. The CVO continually reconciles the cluster with the target state declared in the spec property of the ClusterVersion resource. 
When the desired release differs from the actual release, that reconciliation updates the cluster. Update availability data The ClusterVersion resource also contains information about updates that are available to the cluster. This includes updates that are available, but not recommended due to a known risk that applies to the cluster. These updates are known as conditional updates. To learn how the CVO maintains this information about available updates in the ClusterVersion resource, see the "Evaluation of update availability" section. You can inspect all available updates with the following command: USD oc adm upgrade --include-not-recommended Note The additional --include-not-recommended parameter includes updates that are available with known issues that apply to the cluster. Example output Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. 
https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 The oc adm upgrade command queries the ClusterVersion resource for information about available updates and presents it in a human-readable format. One way to directly inspect the underlying availability data created by the CVO is by querying the ClusterVersion resource with the following command: USD oc get clusterversion version -o json | jq '.status.availableUpdates' Example output [ { "channels": [ "candidate-4.11", "candidate-4.12", "fast-4.11", "fast-4.12" ], "image": "quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775", "url": "https://access.redhat.com/errata/RHBA-2023:3213", "version": "4.11.41" }, ... ] A similar command can be used to check conditional updates: USD oc get clusterversion version -o json | jq '.status.conditionalUpdates' Example output [ { "conditions": [ { "lastTransitionTime": "2023-05-30T16:28:59Z", "message": "The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136", "reason": "PatchesOlderRelease", "status": "False", "type": "Recommended" } ], "release": { "channels": [...], "image": "quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d", "url": "https://access.redhat.com/errata/RHBA-2023:1733", "version": "4.11.36" }, "risks": [...] }, ... ] 1.2.1.2. Evaluation of update availability The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update possibilities. This data is based on the cluster's subscribed channel. The CVO then saves information about update recommendations into either the availableUpdates or conditionalUpdates field of its ClusterVersion resource. The CVO periodically checks the conditional updates for update risks. These risks are conveyed through the data served by the OSUS, which contains information for each version about known issues that might affect a cluster updated to that version. Most risks are limited to clusters with specific characteristics, such as clusters with a certain size or clusters that are deployed in a particular cloud platform. The CVO continuously evaluates its cluster characteristics against the conditional risk information for each conditional update. If the CVO finds that the cluster matches the criteria, the CVO stores this information in the conditionalUpdates field of its ClusterVersion resource. If the CVO finds that the cluster does not match the risks of an update, or that there are no risks associated with the update, it stores the target version in the availableUpdates field of its ClusterVersion resource. The user interface, either the web console or the OpenShift CLI ( oc ), presents this information in sectioned headings to the administrator. 
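As a convenience, the same data can be narrowed down with jq so that only each conditional update target and its current Recommended status are printed. This query is only an illustrative sketch, not an official interface, and it assumes jq is installed on the workstation:
USD oc get clusterversion version -o json | jq -r '(.status.conditionalUpdates // [])[] | [.release.version, (.conditions[] | select(.type=="Recommended") | .status)] | @tsv'
A False value in the second column means the CVO has matched at least one declared risk against this cluster.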
Each known issue associated with the update path contains a link to further resources about the risk so that the administrator can make an informed decision about the update. Additional resources Update recommendation removals and Conditional Updates 1.2.2. Release images A release image is the delivery mechanism for a specific OpenShift Container Platform (OCP) version. It contains the release metadata, a Cluster Version Operator (CVO) binary matching the release version, every manifest needed to deploy individual OpenShift cluster Operators, and a list of SHA digest-versioned references to all container images that make up this OpenShift version. You can inspect the content of a specific release image by running the following command: USD oc adm release extract <release image> Example output USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z USD ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 ... 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata 1 Manifest for ClusterResourceQuota CRD, to be applied on Runlevel 03 2 Manifest for PrometheusRoleBinding resource for the service-ca-operator , to be applied on Runlevel 90 3 List of SHA digest-versioned references to all required images 1.2.3. Update process workflow The following steps represent a detailed workflow of the OpenShift Container Platform (OCP) update process: The target version is stored in the spec.desiredUpdate.version field of the ClusterVersion resource, which may be managed through the web console or the CLI. The Cluster Version Operator (CVO) detects that the desiredUpdate in the ClusterVersion resource differs from the current cluster version. Using graph data from the OpenShift Update Service, the CVO resolves the desired cluster version to a pull spec for the release image. The CVO validates the integrity and authenticity of the release image. Red Hat publishes cryptographically-signed statements about published release images at predefined locations by using image SHA digests as unique and immutable release image identifiers. The CVO utilizes a list of built-in public keys to validate the presence and signatures of the statement matching the checked release image. The CVO creates a job named version-USDversion-USDhash in the openshift-cluster-version namespace. This job uses containers that are executing the release image, so the cluster downloads the image through the container runtime. The job then extracts the manifests and metadata from the release image to a shared volume that is accessible to the CVO. The CVO validates the extracted manifests and metadata. The CVO checks some preconditions to ensure that no problematic condition is detected in the cluster. Certain conditions can prevent updates from proceeding. These conditions are either determined by the CVO itself, or reported by individual cluster Operators that detect some details about the cluster that the Operator considers problematic for the update. 
The CVO records the accepted release in status.desired and creates a status.history entry about the new update. The CVO begins reconciling the manifests from the release image. Cluster Operators are updated in separate stages called Runlevels, and the CVO ensures that all Operators in a Runlevel finish updating before it proceeds to the next level. Manifests for the CVO itself are applied early in the process. When the CVO deployment is applied, the current CVO pod stops, and a CVO pod that uses the new version starts. The new CVO proceeds to reconcile the remaining manifests. The update proceeds until the entire control plane is updated to the new version. Individual cluster Operators might perform update tasks on their domain of the cluster, and while they do so, they report their state through the Progressing=True condition. The Machine Config Operator (MCO) manifests are applied towards the end of the process. The updated MCO then begins updating the system configuration and operating system of every node. Each node might be drained, updated, and rebooted before it starts to accept workloads again. The cluster reports as updated after the control plane update is finished, usually before all nodes are updated. After the update, the CVO maintains all cluster resources to match the state delivered in the release image. 1.2.4. Understanding how manifests are applied during an update Some manifests supplied in a release image must be applied in a certain order because of the dependencies between them. For example, the CustomResourceDefinition resource must be created before the matching custom resources. Additionally, there is a logical order in which the individual cluster Operators must be updated to minimize disruption in the cluster. The Cluster Version Operator (CVO) implements this logical order through the concept of Runlevels. These dependencies are encoded in the filenames of the manifests in the release image: 0000_<runlevel>_<component>_<manifest-name>.yaml For example: 0000_03_config-operator_01_proxy.crd.yaml The CVO internally builds a dependency graph for the manifests, where the CVO obeys the following rules: During an update, manifests at a lower Runlevel are applied before those at a higher Runlevel. Within one Runlevel, manifests for different components can be applied in parallel. Within one Runlevel, manifests for a single component are applied in lexicographic order. The CVO then applies manifests following the generated dependency graph. Note For some resource types, the CVO monitors the resource after its manifest is applied, and considers it to be successfully updated only after the resource reaches a stable state. Achieving this state can take some time. This is especially true for ClusterOperator resources, while the CVO waits for a cluster Operator to update itself and then update its ClusterOperator status. The CVO waits until all cluster Operators in the Runlevel meet the following conditions before it proceeds to the next Runlevel: The cluster Operators have an Available=True condition. The cluster Operators have a Degraded=False condition. The cluster Operators declare they have achieved the desired version in their ClusterOperator resource. Some actions can take significant time to finish. The CVO waits for the actions to complete in order to ensure the subsequent Runlevels can proceed safely.
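While an update is running, you can observe this per-Operator behavior from the CLI. The commands below are a general sketch; the kube-apiserver Operator is used only as an example of an Operator that reports longer Progressing phases:
USD oc adm upgrade
USD oc get clusteroperators
USD oc get clusteroperator kube-apiserver -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}{"\n"}'
The VERSION column of oc get clusteroperators shows which Operators have already declared the new version, which is the same signal the CVO waits for before moving on to the next Runlevel.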
Initially reconciling the new release's manifests is expected to take 60 to 120 minutes in total; see Understanding OpenShift Container Platform update duration for more information about factors that influence update duration. In the example diagram, the CVO is waiting until all work is completed at Runlevel 20. The CVO has applied all manifests to the Operators in the Runlevel, but the kube-apiserver-operator ClusterOperator performs some actions after its new version was deployed. The kube-apiserver-operator ClusterOperator declares this progress through the Progressing=True condition and by not declaring the new version as reconciled in its status.versions . The CVO waits until the ClusterOperator reports an acceptable status, and then it will start reconciling manifests at Runlevel 25. Additional resources Understanding OpenShift Container Platform update duration 1.2.5. Understanding how the Machine Config Operator updates nodes The Machine Config Operator (MCO) applies a new machine configuration to each control plane node and compute node. During the machine configuration update, control plane nodes and compute nodes are organized into their own machine config pools, where the pools of machines are updated in parallel. The .spec.maxUnavailable parameter, which has a default value of 1 , determines how many nodes in a machine config pool can simultaneously undergo the update process. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. When the machine configuration update process begins, the MCO checks the amount of currently unavailable nodes in a pool. If there are fewer unavailable nodes than the value of .spec.maxUnavailable , the MCO initiates the following sequence of actions on available nodes in the pool: Cordon and drain the node Note When a node is cordoned, workloads cannot be scheduled to it. Update the system configuration and operating system (OS) of the node Reboot the node Uncordon the node A node undergoing this process is unavailable until it is uncordoned and workloads can be scheduled to it again. The MCO begins updating nodes until the number of unavailable nodes is equal to the value of .spec.maxUnavailable . As a node completes its update and becomes available, the number of unavailable nodes in the machine config pool is once again fewer than .spec.maxUnavailable . If there are remaining nodes that need to be updated, the MCO initiates the update process on a node until the .spec.maxUnavailable limit is once again reached. This process repeats until each control plane node and compute node has been updated. The following example workflow describes how this process might occur in a machine config pool with 5 nodes, where .spec.maxUnavailable is 3 and all nodes are initially available: The MCO cordons nodes 1, 2, and 3, and begins to drain them. Node 2 finishes draining, reboots, and becomes available again. The MCO cordons node 4 and begins draining it. Node 1 finishes draining, reboots, and becomes available again. The MCO cordons node 5 and begins draining it. Node 3 finishes draining, reboots, and becomes available again. Node 5 finishes draining, reboots, and becomes available again. Node 4 finishes draining, reboots, and becomes available again. 
Because the update process for each node is independent of other nodes, some nodes in the example above finish their update out of the order in which they were cordoned by the MCO. You can check the status of the machine configuration update by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Additional resources Machine Config Overview 1.3. Understanding update channels and releases Update channels are the mechanism by which users declare the OpenShift Container Platform minor version they intend to update their clusters to. They also allow users to choose the timing and level of support their updates will have through the fast , stable , candidate , and eus channel options. The Cluster Version Operator uses an update graph based on the channel declaration, along with other conditional information, to provide a list of recommended and conditional updates available to the cluster. Update channels correspond to a minor version of OpenShift Container Platform. The version number in the channel represents the target minor version that the cluster will eventually be updated to, even if it is higher than the cluster's current minor version. For instance, OpenShift Container Platform 4.10 update channels provide the following recommendations: Updates within 4.10. Updates within 4.9. Updates from 4.9 to 4.10, allowing all 4.9 clusters to eventually update to 4.10, even if they do not immediately meet the minimum z-stream version requirements. eus-4.10 only: updates within 4.8. eus-4.10 only: updates from 4.8 to 4.9 to 4.10, allowing all 4.8 clusters to eventually update to 4.10. 4.10 update channels do not recommend updates to 4.11 or later releases. This strategy ensures that administrators must explicitly decide to update to the minor version of OpenShift Container Platform. Update channels control only release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of OpenShift Container Platform always installs that version. OpenShift Container Platform 4.16 offers the following update channels: stable-4.16 eus-4.y (only offered for EUS versions and meant to facilitate updates between EUS versions) fast-4.16 candidate-4.16 If you do not want the Cluster Version Operator to fetch available updates from the update recommendation service, you can use the oc adm upgrade channel command in the OpenShift CLI to configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted network access and there is no local, reachable update recommendation service. Warning Red Hat recommends updating to versions suggested by OpenShift Update Service only. For a minor version update, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions. 1.3.1. Update channels 1.3.1.1. fast-4.16 channel The fast-4.16 channel is updated with new versions of OpenShift Container Platform 4.16 as soon as Red Hat declares the version as a general availability (GA) release. As such, these releases are fully supported and purposed to be used in production environments. 1.3.1.2. 
stable-4.16 channel While the fast-4.16 channel contains releases as soon as their errata are published, releases are added to the stable-4.16 channel after a delay. During this delay, data is collected from multiple sources and analyzed for indications of product regressions. Once a significant number of data points have been collected, these releases are added to the stable channel. Note Since the time required to obtain a significant number of data points varies based on many factors, Service LeveL Objective (SLO) is not offered for the delay duration between fast and stable channels. For more information, please see "Choosing the correct channel for your cluster" Newly installed clusters default to using stable channels. 1.3.1.3. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform offer Extended Update Support (EUS). Releases promoted to the stable channel are also simultaneously promoted to the EUS channels. The primary purpose of the EUS channels is to serve as a convenience for clusters performing a Control Plane Only update. Note Both standard and non-EUS subscribers can access all EUS repositories and necessary RPMs ( rhel-*-eus-rpms ) to be able to support critical purposes such as debugging and building drivers. 1.3.1.4. candidate-4.16 channel The candidate-4.16 channel offers unsupported early access to releases as soon as they are built. Releases present only in candidate channels may not contain the full feature set of eventual GA releases or features may be removed prior to GA. Additionally, these releases have not been subject to full Red Hat Quality Assurance and may not offer update paths to later GA releases. Given these caveats, the candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. 1.3.1.5. Update recommendations in the channel OpenShift Container Platform maintains an update recommendation service that knows your installed OpenShift Container Platform version and the path to take within the channel to get you to the release. Update paths are also limited to versions relevant to your currently selected channel and its promotion characteristics. You can imagine seeing the following releases in your channel: 4.16.0 4.16.1 4.16.3 4.16.4 The service recommends only updates that have been tested and have no known serious regressions. For example, if your cluster is on 4.16.1 and OpenShift Container Platform suggests 4.16.4, then it is recommended to update from 4.16.1 to 4.16.4. Important Do not rely on consecutive patch numbers. In this example, 4.16.2 is not and never was available in the channel, therefore updates to 4.16.2 are not recommended or supported. 1.3.1.6. Update recommendations and Conditional Updates Red Hat monitors newly released versions and update paths associated with those versions before and after they are added to supported channels. If Red Hat removes update recommendations from any supported release, a superseding update recommendation will be provided to a future version that corrects the regression. There may however be a delay while the defect is corrected, tested, and promoted to your selected channel. Beginning in OpenShift Container Platform 4.10, when update risks are confirmed, they are declared as Conditional Update risks for the relevant updates. Each known risk may apply to all clusters or only clusters matching certain conditions. 
Some examples include having the Platform set to None or the CNI provider set to OpenShiftSDN . The Cluster Version Operator (CVO) continually evaluates known risks against the current cluster state. If no risks match, the update is recommended. If the risk matches, those update paths are labeled as updates with known issues , and a reference link to the known issues is provided. The reference link helps the cluster admin decide if they want to accept the risk and continue to update their cluster. When Red Hat chooses to declare Conditional Update risks, that action is taken in all relevant channels simultaneously. Declaration of a Conditional Update risk may happen either before or after the update has been promoted to supported channels. 1.3.1.7. Choosing the correct channel for your cluster Choosing the appropriate channel involves two decisions. First, select the minor version you want for your cluster update. Selecting a channel which matches your current version ensures that you only apply z-stream updates and do not receive feature updates. Selecting an available channel which has a version greater than your current version will ensure that after one or more updates your cluster will have updated to that version. Your cluster will only be offered channels which match its current version, the next version, or the next EUS version. Note Due to the complexity involved in planning updates between versions many minors apart, channels that assist in planning updates beyond a single Control Plane Only update are not offered. Second, you should choose your desired rollout strategy. You may choose to update as soon as Red Hat declares a release GA by selecting from fast channels or you may want to wait for Red Hat to promote releases to the stable channel. Update recommendations offered in the fast-4.16 and stable-4.16 channels are both fully supported and benefit equally from ongoing data analysis. The promotion delay before promoting a release to the stable channel represents the only difference between the two channels. Updates to the latest z-streams are generally promoted to the stable channel within a week or two, however the delay when initially rolling out updates to the latest minor is much longer, generally 45-90 days. Please consider the promotion delay when choosing your desired channel, as waiting for promotion to the stable channel may affect your scheduling plans. Additionally, there are several factors which may lead an organization to move clusters to the fast channel either permanently or temporarily including: The desire to apply a specific fix known to affect your environment without delay. Application of CVE fixes without delay. CVE fixes may introduce regressions, so promotion delays still apply to z-streams with CVE fixes. Internal testing processes. If it takes your organization several weeks to qualify releases, it is best to test concurrently with our promotion process rather than waiting. This also assures that any telemetry signal provided to Red Hat is factored into our rollout, so issues relevant to you can be fixed faster. 1.3.1.8. Restricted network clusters If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact updates. During an update, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings.
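In such disconnected environments, an update is typically requested against an explicitly mirrored release image rather than a version resolved through OSUS. The command below is only a sketch; the registry host, repository, and digest are placeholders for values from your own mirror:
USD oc adm upgrade --allow-explicit-upgrade --to-image=mirror.example.com/ocp/release@sha256:<digest>
Because the cluster cannot consult the update recommendation service in this situation, verify the target release against the relevant errata before you run the command.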
1.3.1.9. Switching between channels A channel can be switched from the web console or through the oc adm upgrade channel command: USD oc adm upgrade channel <channel> The web console will display an alert if you switch to a channel that does not include the current release. The web console does not recommend any updates while on a channel without the current release. You can return to the original channel at any point, however. Changing your channel might impact the supportability of your cluster. The following conditions might apply: Your cluster is still supported if you change from the stable-4.16 channel to the fast-4.16 channel. You can switch to the candidate-4.16 channel at any time, but some releases for this channel might be unsupported. You can switch from the candidate-4.16 channel to the fast-4.16 channel if your current release is a general availability release. You can always switch from the fast-4.16 channel to the stable-4.16 channel. There is a possible delay of up to a day for the release to be promoted to stable-4.16 if the current release was recently promoted. Additional resources Updating along a conditional upgrade path Choosing the correct channel for your cluster 1.4. Understanding OpenShift Container Platform update duration OpenShift Container Platform update duration varies based on the deployment topology. This page helps you understand the factors that affect update duration and estimate how long the cluster update takes in your environment. 1.4.1. Factors affecting update duration The following factors can affect your cluster update duration: The reboot of compute nodes to the new machine configuration by Machine Config Operator (MCO) The value of MaxUnavailable in the machine config pool Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. The minimum number or percentages of replicas set in pod disruption budget (PDB) The number of nodes in the cluster The health of the cluster nodes 1.4.2. Cluster update phases In OpenShift Container Platform, the cluster update happens in two phases: Cluster Version Operator (CVO) target update payload deployment Machine Config Operator (MCO) node updates 1.4.2.1. Cluster Version Operator target update payload deployment The Cluster Version Operator (CVO) retrieves the target update release image and applies it to the cluster. All components which run as pods are updated during this phase, whereas the host components are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes. Note The CVO phase of the update does not restart the nodes. 1.4.2.2. Machine Config Operator node updates The Machine Config Operator (MCO) applies a new machine configuration to each control plane and compute node. During this process, the MCO performs the following sequential actions on each node of the cluster: Cordon and drain the node Update the operating system (OS) of the node Reboot the node Uncordon the node so that workloads can be scheduled on it again Note When a node is cordoned, workloads cannot be scheduled to it. The time to complete this process depends on several factors including the node and infrastructure configuration. This process might take 5 or more minutes to complete per node.
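Before tuning how many compute nodes update in parallel, it can help to check how the worker machine config pool is currently configured. The commands below are a minimal sketch that assumes the default worker pool; following the warning above, they leave the control plane pool untouched:
USD oc get mcp worker -o jsonpath='{.spec.maxUnavailable}{"\n"}'    # empty output means the default of 1 is in effect
USD oc patch mcp/worker --type merge -p '{"spec":{"maxUnavailable":2}}'
Only raise this value if enough schedulable capacity remains for workloads guarded by pod disruption budgets to drain, as discussed next.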
In addition to MCO, you should consider the impact of the following parameters: The control plane node update duration is predictable and oftentimes shorter than compute nodes, because the control plane workloads are tuned for graceful updates and quick drains. You can update the compute nodes in parallel by setting the maxUnavailable field to greater than 1 in the Machine Config Pool (MCP). The MCO cordons the number of nodes specified in maxUnavailable and marks them unavailable for update. When you increase maxUnavailable on the MCP, it can help the pool to update more quickly. However, if maxUnavailable is set too high, and several nodes are cordoned simultaneously, the pod disruption budget (PDB) guarded workloads could fail to drain because a schedulable node cannot be found to run the replicas. If you increase maxUnavailable for the MCP, ensure that you still have sufficient schedulable nodes to allow PDB guarded workloads to drain. Before you begin the update, you must ensure that all the nodes are available. Any unavailable nodes can significantly impact the update duration because the node unavailability affects the maxUnavailable and pod disruption budgets. To check the status of nodes from the terminal, run the following command: USD oc get node Example Output NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb If the status of the node is NotReady or SchedulingDisabled , then the node is not available and this impacts the update duration. You can check the status of nodes from the Administrator perspective in the web console by expanding Compute Nodes . Additional resources Machine Config Overview Pod disruption budget 1.4.2.3. Example update duration of cluster Operators The diagram shows an example of the time that cluster Operators might take to update to their new versions. The example is based on a three-node AWS OVN cluster, which has a healthy compute MachineConfigPool and no workloads that take long to drain, updating from 4.13 to 4.14. Note The specific update duration of a cluster and its Operators can vary based on several cluster characteristics, such as the target version, the amount of nodes, and the types of workloads scheduled to the nodes. Some Operators, such as the Cluster Version Operator, update themselves in a short amount of time. These Operators have either been omitted from the diagram or are included in the broader group of Operators labeled "Other Operators in parallel". Each cluster Operator has characteristics that affect the time it takes to update itself. For instance, the Kube API Server Operator in this example took more than eleven minutes to update because kube-apiserver provides graceful termination support, meaning that existing, in-flight requests are allowed to complete gracefully. This might result in a longer shutdown of the kube-apiserver . In the case of this Operator, update speed is sacrificed to help prevent and limit disruptions to cluster functionality during an update. Another characteristic that affects the update duration of an Operator is whether the Operator utilizes DaemonSets. 
The Network and DNS Operators utilize full-cluster DaemonSets, which can take time to roll out their version changes, and this is one of several reasons why these Operators might take longer to update themselves. The update duration for some Operators is heavily dependent on characteristics of the cluster itself. For instance, the Machine Config Operator update applies machine configuration changes to each node in the cluster. A cluster with many nodes has a longer update duration for the Machine Config Operator compared to a cluster with fewer nodes. Note Each cluster Operator is assigned a stage during which it can be updated. Operators within the same stage can update simultaneously, and Operators in a given stage cannot begin updating until all previous stages have been completed. For more information, see "Understanding how manifests are applied during an update" in the "Additional resources" section. Additional resources Introduction to OpenShift updates Understanding how manifests are applied during an update 1.4.3. Estimating cluster update time Historical update duration of similar clusters provides you the best estimate for future cluster updates. However, if the historical data is not available, you can use the following convention to estimate your cluster update time: A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are always updated in parallel with the compute nodes. In addition, one or more compute nodes can be updated in parallel based on the maxUnavailable value. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. For example, to estimate the update time, consider an OpenShift Container Platform cluster with three control plane nodes and six compute nodes and each host takes about 5 minutes to reboot. Note The time it takes to reboot a particular node varies significantly. In cloud instances, the reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot might take more than 15 minutes. Scenario-1 When you set maxUnavailable to 1 for both the control plane and compute nodes Machine Config Pool (MCP), then all the six compute nodes will update one after another in each iteration: Scenario-2 When you set maxUnavailable to 2 for the compute node MCP, then two compute nodes will update in parallel in each iteration. Therefore it takes a total of three iterations to update all the nodes. Important The default setting for maxUnavailable is 1 for all the MCPs in OpenShift Container Platform. It is recommended that you do not change the maxUnavailable in the control plane MCP. 1.4.4. Red Hat Enterprise Linux (RHEL) compute nodes Red Hat Enterprise Linux (RHEL) compute nodes require an additional usage of openshift-ansible to update node binary components. The actual time spent updating RHEL compute nodes should not be significantly different from Red Hat Enterprise Linux CoreOS (RHCOS) compute nodes. Additional resources Updating RHEL compute machines 1.4.5. Additional resources OpenShift Container Platform architecture OpenShift Container Platform updates | [
"oc adm upgrade --include-not-recommended",
"Cluster version is 4.13.40 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.13, candidate-4.14, eus-4.14, fast-4.13, fast-4.14, stable-4.13, stable-4.14) Recommended updates: VERSION IMAGE 4.14.27 quay.io/openshift-release-dev/ocp-release@sha256:4d30b359aa6600a89ed49ce6a9a5fdab54092bcb821a25480fdfbc47e66af9ec 4.14.26 quay.io/openshift-release-dev/ocp-release@sha256:4fe7d4ccf4d967a309f83118f1a380a656a733d7fcee1dbaf4d51752a6372890 4.14.25 quay.io/openshift-release-dev/ocp-release@sha256:a0ef946ef8ae75aef726af1d9bbaad278559ad8cab2c1ed1088928a0087990b6 4.14.24 quay.io/openshift-release-dev/ocp-release@sha256:0a34eac4b834e67f1bca94493c237e307be2c0eae7b8956d4d8ef1c0c462c7b0 4.14.23 quay.io/openshift-release-dev/ocp-release@sha256:f8465817382128ec7c0bc676174bad0fb43204c353e49c146ddd83a5b3d58d92 4.13.42 quay.io/openshift-release-dev/ocp-release@sha256:dcf5c3ad7384f8bee3c275da8f886b0bc9aea7611d166d695d0cf0fff40a0b55 4.13.41 quay.io/openshift-release-dev/ocp-release@sha256:dbb8aa0cf53dc5ac663514e259ad2768d8c82fd1fe7181a4cfb484e3ffdbd3ba Updates with known issues: Version: 4.14.22 Image: quay.io/openshift-release-dev/ocp-release@sha256:7093fa606debe63820671cc92a1384e14d0b70058d4b4719d666571e1fc62190 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 18.061ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689 Version: 4.14.21 Image: quay.io/openshift-release-dev/ocp-release@sha256:6e3fba19a1453e61f8846c6b0ad3abf41436a3550092cbfd364ad4ce194582b7 Reason: MultipleReasons Message: Exposure to AzureRegistryImageMigrationUserProvisioned is unknown due to an evaluation failure: client-side throttling: only 33.991ms has elapsed since the last match call completed for this cluster condition backend; this cached cluster condition request has been queued for later execution In Azure clusters with the user-provisioned registry storage, the in-cluster image registry component may struggle to complete the cluster update. https://issues.redhat.com/browse/IR-468 Incoming HTTP requests to services exposed by Routes may fail while routers reload their configuration, especially when made with Apache HTTPClient versions before 5.0. The problem is more likely to occur in clusters with higher number of Routes and corresponding endpoints. https://issues.redhat.com/browse/NE-1689",
"oc get clusterversion version -o json | jq '.status.availableUpdates'",
"[ { \"channels\": [ \"candidate-4.11\", \"candidate-4.12\", \"fast-4.11\", \"fast-4.12\" ], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:400267c7f4e61c6bfa0a59571467e8bd85c9188e442cbd820cc8263809be3775\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:3213\", \"version\": \"4.11.41\" }, ]",
"oc get clusterversion version -o json | jq '.status.conditionalUpdates'",
"[ { \"conditions\": [ { \"lastTransitionTime\": \"2023-05-30T16:28:59Z\", \"message\": \"The 4.11.36 release only resolves an installation issue https://issues.redhat.com//browse/OCPBUGS-11663 , which does not affect already running clusters. 4.11.36 does not include fixes delivered in recent 4.11.z releases and therefore upgrading from these versions would cause fixed bugs to reappear. Red Hat does not recommend upgrading clusters to 4.11.36 version for this reason. https://access.redhat.com/solutions/7007136\", \"reason\": \"PatchesOlderRelease\", \"status\": \"False\", \"type\": \"Recommended\" } ], \"release\": { \"channels\": [...], \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:8c04176b771a62abd801fcda3e952633566c8b5ff177b93592e8e8d2d1f8471d\", \"url\": \"https://access.redhat.com/errata/RHBA-2023:1733\", \"version\": \"4.11.36\" }, \"risks\": [...] }, ]",
"oc adm release extract <release image>",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.12.6-x86_64 Extracted release payload from digest sha256:800d1e39d145664975a3bb7cbc6e674fbf78e3c45b5dde9ff2c5a11a8690c87b created at 2023-03-01T12:46:29Z ls 0000_03_authorization-openshift_01_rolebindingrestriction.crd.yaml 0000_03_config-operator_01_proxy.crd.yaml 0000_03_marketplace-operator_01_operatorhub.crd.yaml 0000_03_marketplace-operator_02_operatorhub.cr.yaml 0000_03_quota-openshift_01_clusterresourcequota.crd.yaml 1 0000_90_service-ca-operator_02_prometheusrolebinding.yaml 2 0000_90_service-ca-operator_03_servicemonitor.yaml 0000_99_machine-api-operator_00_tombstones.yaml image-references 3 release-metadata",
"0000_<runlevel>_<component>_<manifest-name>.yaml",
"0000_03_config-operator_01_proxy.crd.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker rendered-worker-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"oc adm upgrade channel <channel>",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb",
"Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)",
"Cluster update time = 60 + (6 x 5) = 90 minutes",
"Cluster update time = 60 + (3 x 5) = 75 minutes"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/updating_clusters/understanding-openshift-updates-1 |
Power Management Guide | Power Management Guide Red Hat Enterprise Linux 7 Managing and optimizing power consumption on RHEL 7 Red Hat, Inc. Marie Dolezelova Red Hat Customer Content Services [email protected] Jana Heves Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services Don Domingo Red Hat Customer Content Services Rudiger Landmann Red Hat Customer Content Services Jack Reed Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/index |
Chapter 1. Introduction | Chapter 1. Introduction You can use host-based subscriptions for Red Hat Enterprise Linux virtual machines in the following virtualization platforms: Red Hat Virtualization Red Hat Enterprise Linux Virtualization (KVM) Red Hat OpenStack Platform VMware vSphere Microsoft Hyper-V 1.1. Host-based Subscriptions Virtual machines can use host-based subscriptions instead of consuming entitlements from physical subscriptions. A host-based subscription is attached to a hypervisor and entitles it to provide subscriptions to its virtual machines. Many host-based subscriptions provide entitlements for unlimited virtual machines. To allow virtual machines to inherit subscriptions from their hypervisors, you must install and configure virt-who. Virt-who queries the virtualization platform and reports hypervisor and virtual machine information to Red Hat Satellite. When a virtual machine is registered with an activation key that has no subscriptions attached and auto-attach set to true , and sufficient host-based subscriptions are available, one of the following behaviors occurs: If the virtual machine has been reported by virt-who and a host-based subscription is attached to the hypervisor, the virtual machine inherits a subscription from the hypervisor. If the virtual machine has been reported by virt-who, and the hypervisor is registered to Satellite but does not have a host-based subscription attached, a host-based subscription is attached to the hypervisor and inherited by the virtual machine. If the virtual machine, or its hypervisor, has not been reported by virt-who, Satellite grants the virtual machine a temporary subscription, valid for up to seven days. After virt-who reports updated information, Satellite can determine which hypervisor the virtual machine is running on and attach a permanent subscription to the virtual machine. If auto-attach is enabled, but virt-who is not running or there are no host-based subscriptions available, Satellite attaches physical subscriptions to the virtual machines instead, which might consume more entitlements than intended. If auto-attach is not enabled, virtual machines cannot use host-based subscriptions. To see if a subscription requires virt-who, in the Satellite web UI, navigate to Content > Subscriptions . If there is a tick in the Requires Virt-Who column, you must configure virt-who to use that subscription. Virtual machine subscription process This diagram shows the subscription workflow when a virtual machine has not yet been reported by virt-who: Satellite provisions a virtual machine. The virtual machine requests a subscription from Satellite Server. Satellite Server grants the virtual machine a temporary subscription, valid for a maximum of seven days, while it determines which hypervisor the virtual machine belongs to. Virt-who connects to the hypervisor or virtualization manager and requests information about its virtual machines. The hypervisor or virtualization manager returns a list of its virtual machines to virt-who, including each UUID. Virt-who reports the list of virtual machines and their hypervisors to Satellite Server. Satellite Server attaches a permanent subscription to the virtual machine, if sufficient entitlements are available. Additional resources For more information about the Red Hat subscription model, see Introduction to Red Hat Subscription Management Workflows . 1.2. 
Configuration Overview To allow virtual machines to inherit subscriptions from their hypervisors, complete the following steps: Prerequisites Import a Subscription Manifest that includes a host-based subscription into Satellite Server. For more information, see Importing a Red Hat Subscription Manifest into Satellite Server in Managing Content . Ensure you have sufficient entitlements for the host-based subscription to cover all of the hypervisors you plan to use. If you are using Microsoft Hyper-V, enable remote management on the hypervisors. Create a user with read-only access and a non-expiring password on each hypervisor or virtualization manager. Virt-who uses this account to retrieve the list of virtual machines to report to Satellite Server. For Red Hat products and Microsoft Hyper-V, create a virt-who user on each hypervisor that runs Red Hat Enterprise Linux virtual machines. For VMware vSphere, create a virt-who user on the vCenter Server. The virt-who user requires at least read-only access to all objects in the vCenter Data Center. Procedure Overview Section 1.3, "Virt-who Configuration for Each Virtualization Platform" . Use the table in this section to plan how to configure and deploy virt-who for your virtualization platform. Chapter 2, Creating an Activation Key for Virtual Machines . Create an activation key with auto-attach enabled and no subscriptions attached. Chapter 3, Attaching a Host-based Subscription to Hypervisors . Attach a host-based subscription to all of the hypervisors you plan to use. Chapter 4, Creating a virt-who Configuration . Create a virt-who configuration for each hypervisor or virtualization manager. Chapter 5, Deploying a virt-who Configuration . Deploy the virt-who configurations using the scripts generated by Satellite. Chapter 6, Registering Virtual Machines to use a Host-based Subscription . Register the virtual machines using the auto-attach activation key. 1.3. Virt-who Configuration for Each Virtualization Platform Virt-who is configured using files that specify details such as the virtualization type and the hypervisor or virtualization manager to query. The supported configuration is different for each virtualization platform. Typical virt-who configuration file This example shows a typical virt-who configuration file created using the Satellite web UI or Hammer CLI: The type and server values depend on the virtualization platform. The following table provides more detail. The username refers to a read-only user on the hypervisor or virtualization manager, which you must create before configuring virt-who. The rhsm-username refers to an automatically generated user that only has permissions for virt-who reporting to Satellite Server. Required configuration for each virtualization platform Use this table to plan your virt-who configuration: Supported virtualization platform Type specified in the configuration file Server specified in the configuration file Server where the configuration file is deployed Red Hat Virtualization RHEL Virtualization (KVM) Red Hat OpenStack Platform libvirt Hypervisor (one file for each hypervisor) Each hypervisor VMware vSphere esx vCenter Server Satellite Server, Capsule Server, or a dedicated RHEL server Microsoft Hyper-V hyperv Hyper-V hypervisor (one file for each hypervisor) Satellite Server, Capsule Server, or a dedicated RHEL server Example virt-who configuration files Example virt-who configuration files for several common hypervisor types are shown. 
Example OpenStack virt-who configuration Example KVM virt-who configuration Example VMware virt-who configuration Important The rhevm and xen hypervisor types are not supported. The kubevirt hypervisor type is provided as a Technology Preview only. | [
"[virt-who-config-1] type=libvirt hypervisor_id=hostname owner=Default_Organization env=Library server=hypervisor1.example.com username=virt_who_user encrypted_password=USDcr_password rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_1 rhsm_encrypted_password=USDuser_password rhsm_prefix=/rhsm",
"cat /etc/virt-who.d/virt-who-config-1.conf This configuration file is managed via the virt-who configure plugin manual edits will be deleted. [virt-who-config-1] type=libvirt hypervisor_id=hostname owner=ORG env=Library server=qemu:///system <==== username=virt-who-user encrypted_password=xxxxxxxxxxx rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_1 rhsm_encrypted_password=yyyyyyyyyyy rhsm_prefix=/rhsm",
"type=libvirt hypervisor_id=hostname owner=gss env=Library server=qemu+ssh://[email protected]/system username=root encrypted_password=33di3ksskd rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_2 rhsm_encrypted_password=23233dj3j3k rhsm_prefix=/rhsm",
"type=esx hypervisor_id=hostname owner=gss env=Library server=vcenter.example.com [email protected] encrypted_password=33di3ksskd rhsm_hostname=satellite.example.com rhsm_username=virt_who_reporter_2 rhsm_encrypted_password=23233dj3j3k rhsm_prefix=/rhsm"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_virtual_machine_subscriptions_in_red_hat_satellite/introduction |
1.8.2. Three-Tier LVS Topology | 1.8.2. Three-Tier LVS Topology Figure 1.22, "Three-Tier LVS Topology" shows a typical three-tier LVS configuration. In the example, the active LVS router routes the requests from the public network (Internet) to the second tier - real servers. Each real server then accesses a shared data source of a Red Hat cluster in the third tier over the private network. Figure 1.22. Three-Tier LVS Topology This topology is suited well for busy FTP servers, where accessible data is stored on a central, highly available server and accessed by each real server via an exported NFS directory or Samba share. This topology is also recommended for websites that access a central, high-availability database for transactions. Additionally, using an active-active configuration with a Red Hat cluster, you can configure one high-availability cluster to serve both of these roles simultaneously. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-lvs-cm-CSO |
Chapter 24. OperatorPKI [network.operator.openshift.io/v1] | Chapter 24. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. A Secret called <name>-ca with two data keys: tls.key - the private key tls.crt - the CA certificate A ConfigMap called <name>-ca with a single data key: cabundle.crt - the CA certificate(s) A Secret called <name>-cert with two data keys: tls.key - the private key tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3 The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorPKISpec is the PKI configuration. status object OperatorPKIStatus is not implemented. 24.1.1. .spec Description OperatorPKISpec is the PKI configuration. Type object Required targetCert Property Type Description targetCert object targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled 24.1.2. .spec.targetCert Description targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled Type object Required commonName Property Type Description commonName string commonName is the value in the certificate's CN 24.1.3. .status Description OperatorPKIStatus is not implemented. Type object 24.2. API endpoints The following API endpoints are available: /apis/network.operator.openshift.io/v1/operatorpkis GET : list objects of kind OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis DELETE : delete collection of OperatorPKI GET : list objects of kind OperatorPKI POST : create an OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} DELETE : delete an OperatorPKI GET : read the specified OperatorPKI PATCH : partially update the specified OperatorPKI PUT : replace the specified OperatorPKI 24.2.1. /apis/network.operator.openshift.io/v1/operatorpkis Table 24.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OperatorPKI Table 24.2. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty 24.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis Table 24.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 24.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorPKI Table 24.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorPKI Table 24.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.8. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorPKI Table 24.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.10. Body parameters Parameter Type Description body OperatorPKI schema Table 24.11. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 202 - Accepted OperatorPKI schema 401 - Unauthorized Empty 24.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} Table 24.12. Global path parameters Parameter Type Description name string name of the OperatorPKI namespace string object name and auth scope, such as for teams and projects Table 24.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorPKI Table 24.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 24.15. Body parameters Parameter Type Description body DeleteOptions schema Table 24.16. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorPKI Table 24.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 24.18. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorPKI Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 24.20. Body parameters Parameter Type Description body Patch schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorPKI Table 24.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.23. Body parameters Parameter Type Description body OperatorPKI schema Table 24.24. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 401 - Unauthorized Empty | [
"More specifically, given an OperatorPKI with <name>, the CNO will manage:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/operatorpki-network-operator-openshift-io-v1 |
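A minimal OperatorPKI manifest, sketched only from the schema above; the metadata values and the commonName are illustrative assumptions, and in practice the CNO creates these objects for its own internal use rather than expecting users to apply them:

apiVersion: network.operator.openshift.io/v1
kind: OperatorPKI
metadata:
  name: example-pki                       # hypothetical name; yields example-pki-ca and example-pki-cert
  namespace: openshift-network-operator   # assumed namespace for the example
spec:
  targetCert:
    commonName: example-target            # value written into the target certificate's CN

If such an object existed, the generated material could be inspected with standard commands such as oc get secret example-pki-ca -o yaml and oc get configmap example-pki-ca -o yaml, which would show the tls.key, tls.crt, and cabundle.crt data keys listed above.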
Appendix A. Component Versions | Appendix A. Component Versions This appendix is a list of components and their versions in the Red Hat Enterprise Linux 6.7 release. Table A.1. Component Versions Component Version Kernel 2.6.32-573 QLogic qla2xxx driver 8.07.00.16.06.7-k QLogic ql2xxx firmware ql2100-firmware-1.19.38-3.1 ql2200-firmware-2.02.08-3.1 ql23xx-firmware-3.03.27-3.1 ql2400-firmware-7.03.00-1 ql2500-firmware-7.03.00-1 Emulex lpfc driver 10.6.0.20 iSCSI initiator utils iscsi-initiator-utils-6.2.0.873-14 DM-Multipath device-mapper-multipath-libs-0.4.9-87 LVM lvm2-2.02.118-2 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/component_versions |
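To verify that a running system actually carries these component versions, the usual RPM and kernel-module queries can be used; this is an illustrative check, not part of the original release notes:

rpm -q kernel lvm2 device-mapper-multipath-libs iscsi-initiator-utils ql2400-firmware   # compare against the table above
modinfo -F version qla2xxx lpfc                                                          # report the installed driver versions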
6.7. Configuring Fencing for Cluster Members | 6.7. Configuring Fencing for Cluster Members Once you have completed the initial steps of creating a cluster and creating fence devices, you need to configure fencing for the cluster nodes. To configure fencing for the nodes after creating a new cluster and configuring the fencing devices for the cluster, follow the steps in this section. Note that you must configure fencing for each node in the cluster. Note It is recommended that you configure multiple fencing mechanisms for each node. A fencing device can fail due to network split, a power outage, or a problem in the fencing device itself. Configuring multiple fencing mechanisms can reduce the likelihood that the failure of a fencing device will have fatal results. This section documents the following procedures: Section 6.7.1, "Configuring a Single Power-Based Fence Device for a Node" Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" Section 6.7.3, "Configuring a Backup Fence Device" Section 6.7.4, "Configuring a Node with Redundant Power" Section 6.7.6, "Removing Fence Methods and Fence Instances" 6.7.1. Configuring a Single Power-Based Fence Device for a Node Use the following procedure to configure a node with a single power-based fence device. The fence device is named my_apc , which uses the fence_apc fencing agent. In this example, the device named my_apc was previously configured with the --addfencedev option, as described in Section 6.5, "Configuring Fence Devices" . Add a fence method for the node, providing a name for the fence method. For example, to configure a fence method named APC for the node node-01.example.com in the configuration file on the cluster node node-01.example.com , execute the following command: Add a fence instance for the method. You must specify the fence device to use for the node, the node this instance applies to, the name of the method, and any options for this method that are specific to this node: For example, to configure a fence instance in the configuration file on the cluster node node-01.example.com that uses power port 1 on the APC switch for the fence device named my_apc to fence cluster node node-01.example.com using the method named APC , execute the following command: You will need to add a fence method for each node in the cluster. The following commands configure a fence method for each node with the method name APC . The device for the fence method specifies my_apc as the device name, which is a device previously configured with the --addfencedev option, as described in Section 6.5, "Configuring Fence Devices" . Each node is configured with a unique APC switch power port number: The port number for node-01.example.com is 1 , the port number for node-02.example.com is 2 , and the port number for node-03.example.com is 3 . Example 6.2, " cluster.conf After Adding Power-Based Fence Methods " shows a cluster.conf configuration file after you have added these fencing methods and instances to each node in the cluster. Example 6.2. cluster.conf After Adding Power-Based Fence Methods Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . | [
"ccs -h host --addmethod method node",
"ccs -h node01.example.com --addmethod APC node01.example.com",
"ccs -h host --addfenceinst fencedevicename node method [ options ]",
"ccs -h node01.example.com --addfenceinst my_apc node01.example.com APC port=1",
"ccs -h node01.example.com --addmethod APC node01.example.com ccs -h node01.example.com --addmethod APC node02.example.com ccs -h node01.example.com --addmethod APC node03.example.com ccs -h node01.example.com --addfenceinst my_apc node01.example.com APC port=1 ccs -h node01.example.com --addfenceinst my_apc node02.example.com APC port=2 ccs -h node01.example.com --addfenceinst my_apc node03.example.com APC port=3",
"<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"my_apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"my_apc\" port=\"2\"/> </method> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"APC\"> <device name=\"my_apc\" port=\"3\"/> </method> </fence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"my_apc\" passwd=\"password_example\"/> </fencedevices> <rm> </rm> </cluster>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-member-ccs-CA |
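Once the fence methods and instances above are in place, the configuration can be propagated and verified from the same administration host used in the examples. The commands below use the standard ccs propagation and verification options referenced in this guide, with node-01.example.com assumed as the host the configuration was built on:

ccs -h node-01.example.com --sync --activate    # propagate cluster.conf to all cluster nodes and activate it
ccs -h node-01.example.com --checkconf          # confirm that every node now has an identical configuration file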
Chapter 10. SELinux systemd Access Control | Chapter 10. SELinux systemd Access Control In Red Hat Enterprise Linux 7, system services are controlled by the systemd daemon. In previous releases of Red Hat Enterprise Linux, daemons could be started in two ways: At boot time, the System V init daemon launched an init.rc script and then this script launched the required daemon. For example, the Apache server, which was started at boot, got the following SELinux label: An administrator launched the init.rc script manually, causing the daemon to run. For example, when the service httpd restart command was invoked on the Apache server, the resulting SELinux label looked as follows: When launched manually, the process adopted the user portion of the SELinux label that started it, making the labeling in the two scenarios above inconsistent. With the systemd daemon, the transitions are very different. As systemd handles all the calls to start and stop daemons on the system, using the init_t type, it can override the user part of the label when a daemon is restarted manually. As a result, the labels in both scenarios above are system_u:system_r:httpd_t:s0 as expected and the SELinux policy could be improved to govern which domains are able to control which units. 10.1. SELinux Access Permissions for Services In previous versions of Red Hat Enterprise Linux, an administrator was able to control which users or applications were able to start or stop services based on the label of the System V Init script. Now, systemd starts and stops all services, and users and processes communicate with systemd using the systemctl utility. The systemd daemon has the ability to consult the SELinux policy and check the label of the calling process and the label of the unit file that the caller tries to manage, and then ask SELinux whether or not the caller is allowed the access. This approach strengthens access control to critical system capabilities, which include starting and stopping system services. For example, previously, administrators had to allow NetworkManager to execute systemctl to send a D-Bus message to systemd , which would in turn start or stop whatever service NetworkManager requested. In fact, NetworkManager was allowed to do everything systemctl could do. It was also impossible to set up confined administrators so that they could start or stop just particular services. To fix these issues, systemd also works as an SELinux Access Manager. It can retrieve the label of the process running systemctl or the process that sent a D-Bus message to systemd . The daemon then looks up the label of the unit file that the process wanted to configure. Finally, systemd can check with the kernel whether the SELinux policy allows the specific access between the process label and the unit file label. This means a compromised application that needs to interact with systemd for a specific service can now be confined by SELinux. Policy writers can also use these fine-grained controls to confine administrators. Policy changes involve a new class called service , with the following permissions: For example, a policy writer can now allow a domain to get the status of a service or start and stop a service, but not enable or disable a service. Access control operations in SELinux and systemd do not match in all cases. A mapping was defined to line up systemd method calls with SELinux access checks.
Table 10.1, "Mapping of systemd unit file method calls on SELinux access checks" maps access checks on unit files while Table 10.2, "Mapping of systemd general system calls on SELinux access checks" covers access checks for the system in general. If no match is found in either table, then the undefined system check is called. Table 10.1. Mapping of systemd unit file method calls on SELinux access checks systemd unit file method SELinux access check DisableUnitFiles disable EnableUnitFiles enable GetUnit status GetUnitByPID status GetUnitFileState status Kill stop KillUnit stop LinkUnitFiles enable ListUnits status LoadUnit status MaskUnitFiles disable PresetUnitFiles enable ReenableUnitFiles enable Reexecute start Reload reload ReloadOrRestart start ReloadOrRestartUnit start ReloadOrTryRestart start ReloadOrTryRestartUnit start ReloadUnit reload ResetFailed stop ResetFailedUnit stop Restart start RestartUnit start Start start StartUnit start StartUnitReplace start Stop stop StopUnit stop TryRestart start TryRestartUnit start UnmaskUnitFiles enable Table 10.2. Mapping of systemd general system calls on SELinux access checks systemd general system call SELinux access check ClearJobs reboot FlushDevices halt Get status GetAll status GetJob status GetSeat status GetSession status GetSessionByPID status GetUser status Halt halt Introspect status KExec reboot KillSession halt KillUser halt ListJobs status ListSeats status ListSessions status ListUsers status LockSession halt PowerOff halt Reboot reboot SetUserLinger halt TerminateSeat halt TerminateSession halt TerminateUser halt Example 10.1. SELinux Policy for a System Service By using the sesearch utility, you can list policy rules for a system service. For example, calling the sesearch -A -s NetworkManager_t -c service command returns: | [
"system_u:system_r:httpd_t:s0",
"unconfined_u:system_r:httpd_t:s0",
"class service { start stop status reload kill load enable disable }",
"allow NetworkManager_t dnsmasq_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t nscd_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t ntpd_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t pppd_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t polipo_unit_file_t : service { start stop status reload kill load } ;"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-Security-Enhanced_Linux-Systemd_Access_Control |
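Building on Example 10.1, the same query can be narrowed with a target type and a permission filter to answer a concrete question, for instance whether the NetworkManager_t domain may start units labeled ntpd_unit_file_t (types taken from the output above; the result depends on the policy version installed):

sesearch -A -s NetworkManager_t -t ntpd_unit_file_t -c service -p start

If a matching allow rule is printed, systemd permits the corresponding systemctl start request from that domain; if nothing is returned, systemd in enforcing mode denies the request and an AVC denial is recorded in the audit log.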
Chapter 7. MachineConfigPool [machineconfiguration.openshift.io/v1] | Chapter 7. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigPoolSpec is the spec for MachineConfigPool resource. status object MachineConfigPoolStatus is the status for MachineConfigPool resource. 7.1.1. .spec Description MachineConfigPoolSpec is the spec for MachineConfigPool resource. Type object Property Type Description configuration object The targeted MachineConfig object for the machine config pool. machineConfigSelector object machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. maxUnavailable integer-or-string maxUnavailable defines either an integer number or percentage of nodes in the pool that can go Unavailable during an update. This includes nodes Unavailable for any reason, including user initiated cordons, failing nodes, etc. The default value is 1. A value larger than 1 will mean multiple nodes going unavailable during the update, which may affect your workload stress on the remaining nodes. You cannot set this value to 0 to stop updates (it will default back to 1); to stop updates, use the 'paused' property instead. Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards, even if maxUnavailable is greater than one. nodeSelector object nodeSelector specifies a label selector for Machines paused boolean paused specifies whether or not changes to this machine config pool should be stopped. This includes generating new desiredMachineConfig and update of machines. 7.1.2. .spec.configuration Description The targeted MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.1.3. .spec.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 7.1.4. .spec.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.1.5. .spec.machineConfigSelector Description machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.6. 
.spec.machineConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.7. .spec.machineConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.8. .spec.nodeSelector Description nodeSelector specifies a label selector for Machines Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.9. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.10. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.11. .status Description MachineConfigPoolStatus is the status for MachineConfigPool resource. Type object Property Type Description certExpirys array certExpirys keeps track of important certificate expiration data certExpirys[] object ceryExpiry contains the bundle name and the expiry date conditions array conditions represents the latest available observations of current state. conditions[] object MachineConfigPoolCondition contains condition information for an MachineConfigPool. configuration object configuration represents the current MachineConfig object for the machine config pool. degradedMachineCount integer degradedMachineCount represents the total number of machines marked degraded (or unreconcilable). A node is marked degraded if applying a configuration failed.. machineCount integer machineCount represents the total number of machines in the machine config pool. observedGeneration integer observedGeneration represents the generation observed by the controller. readyMachineCount integer readyMachineCount represents the total number of ready machines targeted by the pool. 
unavailableMachineCount integer unavailableMachineCount represents the total number of unavailable (non-ready) machines targeted by the pool. A node is marked unavailable if it is in updating state or NodeReady condition is false. updatedMachineCount integer updatedMachineCount represents the total number of machines targeted by the pool that have the CurrentMachineConfig as their config. 7.1.12. .status.certExpirys Description certExpirys keeps track of important certificate expiration data Type array 7.1.13. .status.certExpirys[] Description ceryExpiry contains the bundle name and the expiry date Type object Required bundle subject Property Type Description bundle string bundle is the name of the bundle in which the subject certificate resides expiry string expiry is the date after which the certificate will no longer be valid subject string subject is the subject of the certificate 7.1.14. .status.conditions Description conditions represents the latest available observations of current state. Type array 7.1.15. .status.conditions[] Description MachineConfigPoolCondition contains condition information for an MachineConfigPool. Type object Property Type Description lastTransitionTime `` lastTransitionTime is the timestamp corresponding to the last status change of this condition. message string message is a human readable description of the details of the last transition, complementing reason. reason string reason is a brief machine readable explanation for the condition's last transition. status string status of the condition, one of ('True', 'False', 'Unknown'). type string type of the condition, currently ('Done', 'Updating', 'Failed'). 7.1.16. .status.configuration Description configuration represents the current MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.1.17. 
.status.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 7.1.18. .status.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 7.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigpools DELETE : delete collection of MachineConfigPool GET : list objects of kind MachineConfigPool POST : create a MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} DELETE : delete a MachineConfigPool GET : read the specified MachineConfigPool PATCH : partially update the specified MachineConfigPool PUT : replace the specified MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status GET : read status of the specified MachineConfigPool PATCH : partially update status of the specified MachineConfigPool PUT : replace status of the specified MachineConfigPool 7.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigpools HTTP method DELETE Description delete collection of MachineConfigPool Table 7.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigPool Table 7.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPoolList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigPool Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body MachineConfigPool schema Table 7.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 202 - Accepted MachineConfigPool schema 401 - Unauthorized Empty 7.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the MachineConfigPool HTTP method DELETE Description delete a MachineConfigPool Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigPool Table 7.9. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigPool Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigPool Table 7.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body MachineConfigPool schema Table 7.14. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty 7.2.3. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status Table 7.15. Global path parameters Parameter Type Description name string name of the MachineConfigPool HTTP method GET Description read status of the specified MachineConfigPool Table 7.16. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigPool Table 7.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.18. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigPool Table 7.19.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body MachineConfigPool schema Table 7.21. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1
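The endpoint listing above can be exercised directly with any HTTP client. The following is a minimal sketch rather than part of the reference: the token handling, the worker pool name, and the spec.paused field in the patch body are assumptions, while the paths and the dryRun / fieldValidation query parameters come from the tables above.

```bash
# Hypothetical example of calling the MachineConfigPool endpoints; TOKEN and
# API_SERVER are placeholders for a valid bearer token and cluster API URL.
TOKEN=$(oc whoami -t)
API_SERVER=https://api.cluster.example.com:6443

# GET: list objects of kind MachineConfigPool
curl -k -H "Authorization: Bearer $TOKEN" \
  "$API_SERVER/apis/machineconfiguration.openshift.io/v1/machineconfigpools"

# PATCH: partially update the specified MachineConfigPool, validating the body
# strictly and running it as a server-side dry run (nothing is persisted).
curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"paused":true}}' \
  "$API_SERVER/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker?dryRun=All&fieldValidation=Strict"
```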
Chapter 152. StrimziPodSetSpec schema reference | Chapter 152. StrimziPodSetSpec schema reference Used in: StrimziPodSet Property Property type Description selector LabelSelector Selector is a label query which matches all the pods managed by this StrimziPodSet . Only matchLabels is supported. If matchExpressions is set, it will be ignored. pods Map array The Pods managed by this StrimziPodSet. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-StrimziPodSetSpec-reference |
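Because only matchLabels is honoured in the selector, inspecting a StrimziPodSet that the Cluster Operator has already created is the quickest way to see the label query in use. A hedged example; the kafka namespace and the my-cluster-kafka resource name are assumptions:

```bash
# List StrimziPodSets managed by the Cluster Operator (names are placeholders).
oc get strimzipodsets -n kafka

# Print only the selector; any matchExpressions present would be ignored.
oc get strimzipodset my-cluster-kafka -n kafka \
  -o jsonpath='{.spec.selector.matchLabels}'
```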
Chapter 2. Deploying Red Hat build of OpenJDK application in containers | Chapter 2. Deploying Red Hat build of OpenJDK application in containers You can deploy Red Hat build of OpenJDK applications in containers and have them run when the container is loaded. Procedure Copy the application JAR to the /deployments directory in the image JAR file. For example, the following shows a brief Dockerfile that adds an application called testubi.jar to the Red Hat build of OpenJDK 11 UBI8 image: | [
"FROM registry.access.redhat.com/ubi8/openjdk-11 COPY target/testubi.jar /deployments/testubi.jar"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/packaging_red_hat_build_of_openjdk_11_applications_in_containers/deploying-openjdk-apps-in-containers |
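Building on the Dockerfile above, the image can be built and started with any OCI-compatible tool. A brief sketch using podman; the image tag and the published port are placeholders, and it assumes the base image's default entrypoint launches the JAR copied into /deployments :

```bash
# Build the image from the Dockerfile shown above.
podman build -t testubi:latest .

# Run the container; the application JAR in /deployments is started on boot.
podman run --rm -p 8080:8080 testubi:latest
```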
Chapter 3. The Ceph client components | Chapter 3. The Ceph client components Ceph clients differ materially in how they present data storage interfaces. A Ceph block device presents block storage that mounts just like a physical storage drive. A Ceph gateway presents an object storage service with S3-compliant and Swift-compliant RESTful interfaces with its own user management. However, all Ceph clients use the Reliable Autonomic Distributed Object Store (RADOS) protocol to interact with the Red Hat Ceph Storage cluster. They all have the same basic needs: The Ceph configuration file, and the Ceph monitor address. The pool name. The user name and the path to the secret key. Ceph clients tend to follow some similar patterns, such as object-watch-notify and striping. The following sections describe a little bit more about RADOS, librados and common patterns used in Ceph clients. Prerequisites A basic understanding of distributed storage systems. 3.1. Ceph client native protocol Modern applications need a simple object storage interface with asynchronous communication capability. The Ceph Storage Cluster provides a simple object storage interface with asynchronous communication capability. The interface provides direct, parallel access to objects throughout the cluster. Pool Operations Snapshots Read/Write Objects Create or Remove Entire Object or Byte Range Append or Truncate Create/Set/Get/Remove XATTRs Create/Set/Get/Remove Key/Value Pairs Compound operations and dual-ack semantics 3.2. Ceph client object watch and notify A Ceph client can register a persistent interest with an object and keep a session to the primary OSD open. The client can send a notification message and payload to all watchers and receive notification when the watchers receive the notification. This enables a client to use any object as a synchronization/communication channel. 3.3. Ceph client Mandatory Exclusive Locks Mandatory Exclusive Locks is a feature that locks an RBD to a single client, if multiple mounts are in place. This helps address the write conflict situation when multiple mounted clients try to write to the same object. This feature is built on object-watch-notify explained in the section. So, when writing, if one client first establishes an exclusive lock on an object, another mounted client will first check to see if a peer has placed a lock on the object before writing. With this feature enabled, only one client can modify an RBD device at a time, especially when changing internal RBD structures during operations like snapshot create/delete . It also provides some protection for failed clients. For instance, if a virtual machine seems to be unresponsive and you start a copy of it with the same disk elsewhere, the first one will be blacklisted in Ceph and unable to corrupt the new one. Mandatory Exclusive Locks are not enabled by default. You have to explicitly enable it with --image-feature parameter when creating an image. Example Here, the numeral 5 is a summation of 1 and 4 where 1 enables layering support and 4 enables exclusive locking support. So, the above command will create a 100 GB rbd image, enable layering and exclusive lock. Mandatory Exclusive Locks is also a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. Mandatory Exclusive Locks also does some ground work for mirroring. 3.4. Ceph client object map Object map is a feature that tracks the presence of backing RADOS objects when a client writes to an rbd image. 
When a write occurs, that write is translated to an offset within a backing RADOS object. When the object map feature is enabled, the presence of these RADOS objects is tracked. So, we can know if the objects actually exist. Object map is kept in-memory on the librbd client so it can avoid querying the OSDs for objects that it knows don't exist. In other words, object map is an index of the objects that actually exist. Object map is beneficial for certain operations, viz: Resize Export Copy Flatten Delete Read A shrink resize operation is like a partial delete where the trailing objects are deleted. An export operation knows which objects are to be requested from RADOS. A copy operation knows which objects exist and need to be copied. It does not have to iterate over potentially hundreds and thousands of possible objects. A flatten operation performs a copy-up for all parent objects to the clone so that the clone can be detached from the parent i.e, the reference from the child clone to the parent snapshot can be removed. So, instead of all potential objects, copy-up is done only for the objects that exist. A delete operation deletes only the objects that exist in the image. A read operation skips the read for objects it knows doesn't exist. So, for operations like resize, shrinking only, exporting, copying, flattening, and deleting, these operations would need to issue an operation for all potentially affected RADOS objects, whether they exist or not. With object map enabled, if the object doesn't exist, the operation need not be issued. For example, if we have a 1 TB sparse RBD image, it can have hundreds and thousands of backing RADOS objects. A delete operation without object map enabled would need to issue a remove object operation for each potential object in the image. But if object map is enabled, it only needs to issue remove object operations for the objects that exist. Object map is valuable against clones that don't have actual objects but get objects from parents. When there is a cloned image, the clone initially has no objects and all reads are redirected to the parent. So, object map can improve reads as without the object map, first it needs to issue a read operation to the OSD for the clone, when that fails, it issues another read to the parent - with object map enabled. It skips the read for objects it knows doesn't exist. Object map is not enabled by default. You have to explicitly enable it with --image-features parameter when creating an image. Also, Mandatory Exclusive Locks is a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. To enable object map support when creating a image, execute: Here, the numeral 13 is a summation of 1 , 4 and 8 where 1 enables layering support, 4 enables exclusive locking support and 8 enables object map support. So, the above command will create a 100 GB rbd image, enable layering, exclusive lock and object map. 3.5. Ceph client data stripping Storage devices have throughput limitations, which impact performance and scalability. So storage systems often support striping- storing sequential pieces of information across multiple storage devices- to increase throughput and performance. The most common form of data striping comes from RAID. The RAID type most similar to Ceph's striping is RAID 0, or a 'striped volume.' Ceph's striping offers the throughput of RAID 0 striping, the reliability of n-way RAID mirroring and faster recovery. 
Ceph provides three types of clients: Ceph Block Device, Ceph Filesystem, and Ceph Object Storage. A Ceph Client converts its data from the representation format it provides to its users, such as a block device image, RESTful objects, CephFS filesystem directories, into objects for storage in the Ceph Storage Cluster. Tip The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data over multiple Ceph Storage Cluster objects. Ceph Clients that write directly to the Ceph storage cluster using librados must perform the striping, and parallel I/O for themselves to obtain these benefits. The simplest Ceph striping format involves a stripe count of 1 object. Ceph Clients write stripe units to a Ceph Storage Cluster object until the object is at its maximum capacity, and then create another object for additional stripes of data. The simplest form of striping may be sufficient for small block device images, S3 or Swift objects. However, this simple form doesn't take maximum advantage of Ceph's ability to distribute data across placement groups, and consequently doesn't improve performance very much. The following diagram depicts the simplest form of striping: If you anticipate large images sizes, large S3 or Swift objects for example, video, you may see considerable read/write performance improvements by striping client data over multiple objects within an object set. Significant write performance occurs when the client writes the stripe units to their corresponding objects in parallel. Since objects get mapped to different placement groups and further mapped to different OSDs, each write occurs in parallel at the maximum write speed. A write to a single disk would be limited by the head movement for example, 6ms per seek and bandwidth of that one device for example, 100MB/s. By spreading that write over multiple objects, which map to different placement groups and OSDs, Ceph can reduce the number of seeks per drive and combine the throughput of multiple drives to achieve much faster write or read speeds. Note Striping is independent of object replicas. Since CRUSH replicates objects across OSDs, stripes get replicated automatically. In the following diagram, client data gets striped across an object set ( object set 1 in the following diagram) consisting of 4 objects, where the first stripe unit is stripe unit 0 in object 0 , and the fourth stripe unit is stripe unit 3 in object 3 . After writing the fourth stripe, the client determines if the object set is full. If the object set is not full, the client begins writing a stripe to the first object again, see object 0 in the following diagram. If the object set is full, the client creates a new object set, see object set 2 in the following diagram, and begins writing to the first stripe, with a stripe unit of 16, in the first object in the new object set, see object 4 in the diagram below. Three important variables determine how Ceph stripes data: Object Size: Objects in the Ceph Storage Cluster have a maximum configurable size, such as 2 MB, or 4 MB. The object size should be large enough to accommodate many stripe units, and should be a multiple of the stripe unit. Important Red Hat recommends a safe maximum value of 16 MB. Stripe Width: Stripes have a configurable unit size, for example 64 KB. The Ceph Client divides the data it will write to objects into equally sized stripe units, except for the last stripe unit. 
A stripe width should be a fraction of the Object Size so that an object may contain many stripe units. Stripe Count: The Ceph Client writes a sequence of stripe units over a series of objects determined by the stripe count. The series of objects is called an object set. After the Ceph Client writes to the last object in the object set, it returns to the first object in the object set. Important Test the performance of your striping configuration before putting your cluster into production. You CANNOT change these striping parameters after you stripe the data and write it to objects. Once the Ceph Client has striped data to stripe units and mapped the stripe units to objects, Ceph's CRUSH algorithm maps the objects to placement groups, and the placement groups to Ceph OSD Daemons before the objects are stored as files on a storage disk. Note Since a client writes to a single pool, all data striped into objects get mapped to placement groups in the same pool. So they use the same CRUSH map and the same access controls. 3.6. Ceph on-wire encryption You can enable encryption for all Ceph traffic over the network with the messenger version 2 protocol. The secure mode setting for messenger v2 encrypts communication between Ceph daemons and Ceph clients, giving you end-to-end encryption. The second version of Ceph's on-wire protocol, msgr2 , includes several new features: A secure mode encrypting all data moving through the network. Encapsulation improvement of authentication payloads. Improvements to feature advertisement and negotiation. The Ceph daemons bind to multiple ports allowing both the legacy, v1-compatible, and the new, v2-compatible, Ceph clients to connect to the same storage cluster. Ceph clients or other Ceph daemons connecting to the Ceph Monitor daemon will try to use the v2 protocol first, if possible, but if not, then the legacy v1 protocol will be used. By default, both messenger protocols, v1 and v2 , are enabled. The new v2 port is 3300, and the legacy v1 port is 6789, by default. The messenger v2 protocol has two configuration options that control whether the v1 or the v2 protocol is used: ms_bind_msgr1 - This option controls whether a daemon binds to a port speaking the v1 protocol; it is true by default. ms_bind_msgr2 - This option controls whether a daemon binds to a port speaking the v2 protocol; it is true by default. Similarly, two options control based on IPv4 and IPv6 addresses used: ms_bind_ipv4 - This option controls whether a daemon binds to an IPv4 address; it is true by default. ms_bind_ipv6 - This option controls whether a daemon binds to an IPv6 address; it is true by default. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx . Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx . Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . Ensure that you consider cluster CPU requirements when you plan the Red Hat Ceph Storage cluster, to include encryption overhead. Important Using secure mode is currently supported by Ceph kernel clients, such as CephFS and krbd on Red Hat Enterprise Linux. 
Using secure mode is supported by Ceph clients using librbd , such as OpenStack Nova, Glance, and Cinder. Address Changes For both versions of the messenger protocol to coexist in the same storage cluster, the address formatting has changed: Old address format was, IP_ADDR : PORT / CLIENT_ID , for example, 1.2.3.4:5678/91011 . New address format is, PROTOCOL_VERSION : IP_ADDR : PORT / CLIENT_ID , for example, v2:1.2.3.4:5678/91011 , where PROTOCOL_VERSION can be either v1 or v2 . Because the Ceph daemons now bind to multiple ports, the daemons display multiple addresses instead of a single address. Here is an example from a dump of the monitor map: Also, the mon_host configuration option and specifying addresses on the command line, using -m , supports the new address format. Connection Phases There are four phases for making an encrypted connection: Banner On connection, both the client and the server send a banner. Currently, the Ceph banner is ceph 0 0n . Authentication Exchange All data, sent or received, is contained in a frame for the duration of the connection. The server decides if authentication has completed, and what the connection mode will be. The frame format is fixed, and can be in three different forms depending on the authentication flags being used. Message Flow Handshake Exchange The peers identify each other and establish a session. The client sends the first message, and the server will reply with the same message. The server can close connections if the client talks to the wrong daemon. For new sessions, the client and server proceed to exchanging messages. Client cookies are used to identify a session, and can reconnect to an existing session. Message Exchange The client and server start exchanging messages, until the connection is closed. Additional Resources See the Red Hat Ceph Storage Data Security and Hardening Guide for details on enabling the msgr2 protocol. | [
"rbd create --size 102400 mypool/myimage --image-feature 5",
"rbd -p mypool create myimage --size 102400 --image-features 13",
"epoch 1 fsid 50fcf227-be32-4bcb-8b41-34ca8370bd17 last_changed 2021-12-12 11:10:46.700821 created 2021-12-12 11:10:46.700821 min_mon_release 14 (nautilus) 0: [v2:10.0.0.10:3300/0,v1:10.0.0.10:6789/0] mon.a 1: [v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0] mon.b 2: [v2:10.0.0.12:3300/0,v1:10.0.0.12:6789/0] mon.c"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/architecture_guide/the-ceph-client-components |
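For an image that already exists, the same features can be toggled by name instead of through the numeric --image-feature values shown above. A hedged sketch, with the pool and image names as placeholders; exclusive locking remains a prerequisite for the object map:

```bash
# Enable exclusive locking first, then the object map that depends on it.
rbd feature enable mypool/myimage exclusive-lock
rbd feature enable mypool/myimage object-map

# Confirm which features are now active on the image.
rbd info mypool/myimage
```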
Chapter 10. Configuring locking and concurrency | Chapter 10. Configuring locking and concurrency Data Grid uses multi-versioned concurrency control (MVCC) to improve access to shared data. Allowing concurrent readers and writers Readers and writers do not block one another Write skews can be detected and handled Internal locks can be striped 10.1. Locking and concurrency Multi-versioned concurrency control (MVCC) is a concurrency scheme popular with relational databases and other data stores. MVCC offers many advantages over coarse-grained Java synchronization and even JDK Locks for access to shared data. Data Grid's MVCC implementation makes use of minimal locks and synchronizations, leaning heavily towards lock-free techniques such as compare-and-swap and lock-free data structures wherever possible, which helps optimize for multi-CPU and multi-core environments. In particular, Data Grid's MVCC implementation is heavily optimized for readers. Reader threads do not acquire explicit locks for entries, and instead directly read the entry in question. Writers, on the other hand, need to acquire a write lock. This ensures only one concurrent writer per entry, causing concurrent writers to queue up to change an entry. To allow concurrent reads, writers make a copy of the entry they intend to modify, by wrapping the entry in an MVCCEntry . This copy isolates concurrent readers from seeing partially modified state. Once a write has completed, MVCCEntry.commit() will flush changes to the data container and subsequent readers will see the changes written. 10.1.1. Clustered caches and locks In Data Grid clusters, primary owner nodes are responsible for locking keys. For non-transactional caches, Data Grid forwards the write operation to the primary owner of the key so it can attempt to lock it. Data Grid either then forwards the write operation to the other owners or throws an exception if it cannot lock the key. Note If the operation is conditional and fails on the primary owner, Data Grid does not forward it to the other owners. For transactional caches, primary owners can lock keys with optimistic and pessimistic locking modes. Data Grid also supports different isolation levels to control concurrent reads between transactions. 10.1.2. The LockManager The LockManager is a component that is responsible for locking an entry for writing. The LockManager makes use of a LockContainer to locate/hold/create locks. LockContainers come in two broad flavours, with support for lock striping and with support for one lock per entry. 10.1.3. Lock striping Lock striping entails the use of a fixed-size, shared collection of locks for the entire cache, with locks being allocated to entries based on the entry's key's hash code. Similar to the way the JDK's ConcurrentHashMap allocates locks, this allows for a highly scalable, fixed-overhead locking mechanism in exchange for potentially unrelated entries being blocked by the same lock. The alternative is to disable lock striping - which would mean a new lock is created per entry. This approach may give you greater concurrent throughput, but it will be at the cost of additional memory usage, garbage collection churn, etc. Default lock striping settings lock striping is disabled by default, due to potential deadlocks that can happen if locks for different keys end up in the same lock stripe. The size of the shared lock collection used by lock striping can be tuned using the concurrencyLevel attribute of the <locking /> configuration element. 
Configuration example: <locking striping="false|true"/> Or new ConfigurationBuilder().locking().useLockStriping(false|true); 10.1.4. Concurrency levels In addition to determining the size of the striped lock container, this concurrency level is also used to tune any JDK ConcurrentHashMap based collections where related, such as internal to DataContainer s. Please refer to the JDK ConcurrentHashMap Javadocs for a detailed discussion of concurrency levels, as this parameter is used in exactly the same way in Data Grid. Configuration example: <locking concurrency-level="32"/> Or new ConfigurationBuilder().locking().concurrencyLevel(32); 10.1.5. Lock timeout The lock timeout specifies the amount of time, in milliseconds, to wait for a contented lock. Configuration example: <locking acquire-timeout="10000"/> Or new ConfigurationBuilder().locking().lockAcquisitionTimeout(10000); //alternatively new ConfigurationBuilder().locking().lockAcquisitionTimeout(10, TimeUnit.SECONDS); 10.1.6. Consistency The fact that a single owner is locked (as opposed to all owners being locked) does not break the following consistency guarantee: if key K is hashed to nodes {A, B} and transaction TX1 acquires a lock for K , let's say on A . If another transaction, TX2 , is started on B (or any other node) and TX2 tries to lock K then it will fail with a timeout as the lock is already held by TX1 . The reason for this is the that the lock for a key K is always, deterministically, acquired on the same node of the cluster, regardless of where the transaction originates. 10.1.7. Data Versioning Data Grid supports two forms of data versioning: simple and external. The simple versioning is used in transactional caches for write skew check. The external versioning is used to encapsulate an external source of data versioning within Data Grid, such as when using Data Grid with Hibernate which in turn gets its data version information directly from a database. In this scheme, a mechanism to pass in the version becomes necessary, and overloaded versions of put() and putForExternalRead() will be provided in AdvancedCache to take in an external data version. This is then stored on the InvocationContext and applied to the entry at commit time. Note Write skew checks cannot and will not be performed in the case of external data versioning. | [
"<locking striping=\"false|true\"/>",
"new ConfigurationBuilder().locking().useLockStriping(false|true);",
"<locking concurrency-level=\"32\"/>",
"new ConfigurationBuilder().locking().concurrencyLevel(32);",
"<locking acquire-timeout=\"10000\"/>",
"new ConfigurationBuilder().locking().lockAcquisitionTimeout(10000); //alternatively new ConfigurationBuilder().locking().lockAcquisitionTimeout(10, TimeUnit.SECONDS);"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/configuring_data_grid_caches/locking |
Chapter 9. Installation configuration parameters for vSphere | Chapter 9. Installation configuration parameters for vSphere Before you deploy an OpenShift Container Platform cluster on vSphere, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 9.1. Available installation configuration parameters for vSphere The following tables specify the required, optional, and vSphere-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. Note On VMware vSphere, dual-stack networking can specify either IPv4 or IPv6 as the primary address family. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. 
networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. 
String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 9.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 9.4. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. An array of failure domain configuration objects. The name of the failure domain. String If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. 
You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String The path to the vSphere compute cluster. String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String Specifies the absolute path to a pre-existing Red Hat Enterprise Linux CoreOS (RHCOS) image template or virtual machine. The installation program can use the image template or virtual machine to quickly install RHCOS on vSphere hosts. Consider using this parameter as an alternative to uploading an RHCOS image on vSphere hosts. This parameter is available for use only on installer-provisioned infrastructure. String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 9.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. 
You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter: Table 9.5. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 9.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 9.6. Optional VMware vSphere machine pool parameters Parameter Description Values The location from which the installation program downloads the Red Hat Enterprise Linux CoreOS (RHCOS) image. Before setting a path value for this parameter, ensure that the default RHCOS boot image in the OpenShift Container Platform release matches the RHCOS image template or virtual machine version; otherwise, cluster installation might fail. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . The size of the disk in gigabytes. Integer The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . 
The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer The size of a virtual machine's memory in megabytes. Integer | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: vsphere:",
"platform: vsphere: apiVIPs:",
"platform: vsphere: diskType:",
"platform: vsphere: failureDomains:",
"platform: vsphere: failureDomains: name:",
"platform: vsphere: failureDomains: region:",
"platform: vsphere: failureDomains: server:",
"platform: vsphere: failureDomains: zone:",
"platform: vsphere: failureDomains: topology: computeCluster:",
"platform: vsphere: failureDomains: topology: datacenter:",
"platform: vsphere: failureDomains: topology: datastore:",
"platform: vsphere: failureDomains: topology: folder:",
"platform: vsphere: failureDomains: topology: networks:",
"platform: vsphere: failureDomains: topology: resourcePool:",
"platform: vsphere: failureDomains: topology template:",
"platform: vsphere: ingressVIPs:",
"platform: vsphere: vcenters:",
"platform: vsphere: vcenters: datacenters:",
"platform: vsphere: vcenters: password:",
"platform: vsphere: vcenters: port:",
"platform: vsphere: vcenters: server:",
"platform: vsphere: vcenters: user:",
"platform: vsphere: apiVIP:",
"platform: vsphere: cluster:",
"platform: vsphere: datacenter:",
"platform: vsphere: defaultDatastore:",
"platform: vsphere: folder:",
"platform: vsphere: ingressVIP:",
"platform: vsphere: network:",
"platform: vsphere: password:",
"platform: vsphere: resourcePool:",
"platform: vsphere: username:",
"platform: vsphere: vCenter:",
"platform: vsphere: clusterOSImage:",
"platform: vsphere: osDisk: diskSizeGB:",
"platform: vsphere: cpus:",
"platform: vsphere: coresPerSocket:",
"platform: vsphere: memoryMB:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_vsphere/installation-config-parameters-vsphere |
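The dotted parameter paths listed above translate directly into install-config.yaml nesting. The following is a hypothetical, partial skeleton that combines the required fields with the vSphere platform, failure domain, and vCenter settings; every hostname, path, credential, and IP address is a placeholder, and the compute , controlPlane , and networking stanzas are omitted for brevity:

```bash
# Sketch of an install-config.yaml excerpt; all values are placeholders.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  vsphere:
    apiVIPs:
      - 192.168.100.10
    ingressVIPs:
      - 192.168.100.11
    failureDomains:
      - name: fd-1
        region: datacenter
        zone: cluster
        server: vcenter.example.com
        topology:
          computeCluster: /dc1/host/cluster1
          datacenter: dc1
          datastore: /dc1/datastore/ds1
          networks:
            - VM_Network
    vcenters:
      - server: vcenter.example.com
        user: [email protected]
        password: <password>
        port: 443
        datacenters:
          - dc1
pullSecret: '<pull_secret>'
sshKey: 'ssh-ed25519 AAAA...'
EOF
```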
Chapter 57. Additional resources | Chapter 57. Additional resources Designing and building cases for case management Getting started with case management | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/additional_resources_3 |
Chapter 12. Configuring OpenShift connection timeout | Chapter 12. Configuring OpenShift connection timeout By default, the OpenShift route is configured to time out HTTP requests that are longer than 30 seconds. This may cause session timeout issues in Business Central resulting in the following behaviors: "Unable to complete your request. The following exception occurred: (TypeError) : Cannot read property 'indexOf' of null." "Unable to complete your request. The following exception occurred: (TypeError) : b is null." A blank page is displayed when clicking the Project or Server links in Business Central. All Business Central templates already include extended timeout configuration. To configure a longer timeout on Business Central OpenShift routes, add the haproxy.router.openshift.io/timeout: 60s annotation on the target route: - kind: Route apiVersion: v1 id: "$APPLICATION_NAME-rhpamcentr-http" metadata: name: "$APPLICATION_NAME-rhpamcentr" labels: application: "$APPLICATION_NAME" annotations: description: Route for Business Central's http service. haproxy.router.openshift.io/timeout: 60s spec: host: "$BUSINESS_CENTRAL_HOSTNAME_HTTP" to: name: "$APPLICATION_NAME-rhpamcentr" For a full list of global route-specific timeout annotations, see the OpenShift Documentation . | [
"- kind: Route apiVersion: v1 id: \"$APPLICATION_NAME-rhpamcentr-http\" metadata: name: \"$APPLICATION_NAME-rhpamcentr\" labels: application: \"$APPLICATION_NAME\" annotations: description: Route for Business Central's http service. haproxy.router.openshift.io/timeout: 60s spec: host: \"$BUSINESS_CENTRAL_HOSTNAME_HTTP\" to: name: \"$APPLICATION_NAME-rhpamcentr\""
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/configuring-openshift-connection-timeout-proc |
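The same annotation can also be applied to an already deployed route from the command line rather than by editing the template. A hedged example; the route name myapp-rhpamcentr is a placeholder for the Business Central route in your project:

```bash
# Set (or update) the HAProxy timeout annotation on the Business Central route.
oc annotate route myapp-rhpamcentr \
  haproxy.router.openshift.io/timeout=60s --overwrite

# Confirm the annotation is present on the route.
oc get route myapp-rhpamcentr -o yaml | grep haproxy.router.openshift.io/timeout
```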
Deploying AMQ Broker on OpenShift | Deploying AMQ Broker on OpenShift Red Hat AMQ 2020.Q4 For Use with AMQ Broker 7.8 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_broker_on_openshift/index |
4.119. kernel | 4.119. kernel 4.119.1. RHSA-2013:1026 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6.2 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. These packages contain the Linux kernel , the core of any Linux operating system. Security Fixes CVE-2013-1773 , Important A buffer overflow flaw was found in the way UTF-8 characters were converted to UTF-16 in the utf8s_to_utf16s() function of the Linux kernel's FAT file system implementation. A local user able to mount a FAT file system with the "utf8=1" option could use this flaw to crash the system or, potentially, to escalate their privileges. CVE-2012-1796 , Important A flaw was found in the way KVM (Kernel-based Virtual Machine) handled guest time updates when the buffer the guest registered by writing to the MSR_KVM_SYSTEM_TIME machine state register (MSR) crossed a page boundary. A privileged guest user could use this flaw to crash the host or, potentially, escalate their privileges, allowing them to execute arbitrary code at the host kernel level. CVE-2013-1797 , Important A potential use-after-free flaw was found in the way KVM handled guest time updates when the GPA (guest physical address) the guest registered by writing to the MSR_KVM_SYSTEM_TIME machine state register (MSR) fell into a movable or removable memory region of the hosting user-space process (by default, QEMU-KVM) on the host. If that memory region is deregistered from KVM using KVM_SET_USER_MEMORY_REGION and the allocated virtual memory reused, a privileged guest user could potentially use this flaw to escalate their privileges on the host. CVE-2012-1798 , Important A flaw was found in the way KVM emulated IOAPIC (I/O Advanced Programmable Interrupt Controller). A missing validation check in the ioapic_read_indirect() function could allow a privileged guest user to crash the host, or read a substantial portion of host kernel memory. CVE-2012-1848 , Low A format string flaw was found in the ext3_msg() function in the Linux kernel's ext3 file system implementation. A local user who is able to mount an ext3 file system could use this flaw to cause a denial of service or, potentially, escalate their privileges. Red Hat would like to thank Andrew Honig of Google for reporting CVE-2013-1796, CVE-2013-1797, and CVE-2013-1798. Bug Fixes BZ# 956294 The virtual file system (VFS) code had a race condition between the unlink and link system calls that allowed creating hard links to deleted (unlinked) files. This could, under certain circumstances, cause inode corruption that eventually resulted in a file system shutdown. The problem was observed in Red Hat Storage during rsync operations on replicated Gluster volumes that resulted in an XFS shutdown. A testing condition has been added to the VFS code, preventing hard links to deleted files from being created. BZ# 972578 Various race conditions that led to indefinite log reservation hangs due to xfsaild "idle" mode occurred in the XFS file system. This could lead to certain tasks being unresponsive; for example, the cp utility could become unresponsive on heavy workload. This update improves the Active Item List (AIL) pushing logic in xfsaild. 
Also, the log reservation algorithm and interactions with xfsaild have been improved. As a result, the aforementioned problems no longer occur in this scenario. BZ# 972597 When the Active Item List (AIL) becomes empty, the xfsaild daemon is moved to a task sleep state that depends on the timeout value returned by the xfsaild_push() function. The latest changes modified xfsaild_push() to return a 10-ms value when the AIL is empty, which sets xfsaild into the uninterruptible sleep state (D state) and artificially increased system load average. This update applies a patch that fixes this problem by setting the timeout value to the allowed maximum, 50 ms. This moves xfsaild to the interruptible sleep state (S state), avoiding the impact on load average. BZ# 972607 When adding a virtual PCI device, such as virtio disk, virtio net, e1000 or rtl8139, to a KVM guest, the kacpid thread reprograms the hot plug parameters of all devices on the PCI bus to which the new device is being added. When reprogramming the hot plug parameters of a VGA or QXL graphics device, the graphics device emulation requests flushing of the guest's shadow page tables. Previously, if the guest had a huge and complex set of shadow page tables, the flushing operation took a significant amount of time and the guest could appear to be unresponsive for several minutes. This resulted in exceeding the threshold of the "soft lockup" watchdog and the "BUG: soft lockup" events were logged by both, the guest and host kernel. This update applies a series of patches that deal with this problem. The KVM's Memory Management Unit (MMU) now avoids creating multiple page table roots in connection with processors that support Extended Page Tables (EPT). This prevents the guest's shadow page tables from becoming too complex on machines with EPT support. MMU now also flushes only large memory mappings, which alleviates the situation on machines where the processor does not support EPT. Additionally, a free memory accounting race that could prevent KVM MMU from freeing memory pages has been fixed. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 4.119.2. RHSA-2013:0741 - Important: kernel security and bug fix update Updated kernel packages that fix several security issues and several bugs are now available for Red Hat Enterprise Linux 6.2 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. These packages contain the Linux kernel . Security Fixes CVE-2013-0871 , Important A race condition was found in the way the Linux kernel's ptrace implementation handled PTRACE_SETREGS requests when the debuggee was woken due to a SIGKILL signal instead of being stopped. A local, unprivileged user could use this flaw to escalate their privileges. CVE-2012-2133 , Moderate A use-after-free flaw was found in the Linux kernel's memory management subsystem in the way quota handling for huge pages was performed. A local, unprivileged user could use this flaw to cause a denial of service or, potentially, escalate their privileges. Red Hat would like to thank Shachar Raindel for reporting CVE-2012-2133. 
Bug Fixes

BZ# 911265
The Intel 5520 and 5500 chipsets do not properly handle remapping of MSI and MSI-X interrupts. If the interrupt remapping feature is enabled on a system with such a chipset, various problems and service disruptions can occur (for example, a NIC could stop receiving frames), and the "kernel: do_IRQ: 7.71 No irq handler for vector (irq -1)" error message appears in the system logs. As a workaround to this problem, it has been recommended to disable the interrupt remapping feature in the BIOS on such systems, and many vendors have updated their BIOS to disable interrupt remapping by default. However, the problem is still being reported by users whose BIOS level does not have this feature properly turned off. Therefore, this update modifies the kernel to check whether the interrupt remapping feature is enabled on these systems and to provide users with a warning message advising them to turn off the feature and update the BIOS.

BZ# 913161
A possible race between the n_tty_read() and reset_buffer_flags() functions could result in a NULL pointer dereference in the n_tty_read() function under certain circumstances. As a consequence, a kernel panic could have been triggered when interrupting a current task on a serial console. This update modifies the tty driver to use a spin lock to prevent parallel access to variables. A NULL pointer dereference causing a kernel panic can no longer occur in this scenario.

BZ# 915581
Previously, running commands such as "ls", "find" or "move" on a MultiVersion File System (MVFS) could cause a kernel panic. This happened because the d_validate() function, which is used for dentry validation, called the kmem_ptr_validate() function to validate a pointer to a parent dentry. The pointer could have been freed at any time, so the kmem_ptr_validate() function could not guarantee that the pointer was safe to dereference, which could lead to a NULL pointer dereference. This update modifies d_validate() to verify the parent-child relationship by traversing the parent dentry's list of child dentries, which solves this problem. The kernel no longer panics in the described scenario.

BZ# 921959
When running a high thread-count workload of small files on an XFS file system, the system could sometimes become unresponsive or a kernel panic could occur. This occurred because the xfsaild daemon had a subtle code path that led to lock recursion on the xfsaild lock when a buffer in the AIL was already locked and an attempt was made to force the log to unlock it. This patch removes the dangerous code path and queues the log force to be invoked from a safe locking context with respect to xfsaild. This patch also fixes the race condition between buffer locking and buffer pinned state that exposed the original problem by rechecking the state of the buffer after a lock failure. The system no longer hangs and the kernel no longer panics in this scenario.

BZ# 922140
A race condition could occur between page table sharing and virtual memory area (VMA) teardown. As a consequence, multiple "bad pmd" warning messages were displayed and "kernel BUG at mm/filemap.c:129" was reported while shutting down applications that share memory segments backed by huge pages. With this update, the VM_MAYSHARE flag is explicitly cleared during the unmap_hugepage_range() call under the i_mmap_lock. This makes the VMA ineligible for sharing and avoids the race condition. After using shared segments backed by huge pages, applications such as databases and caches shut down correctly, with no crash.
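The BZ# 922140 fix above concerns applications that map shared memory segments backed by huge pages. As a point of reference, the following minimal sketch (not taken from any affected application) creates, touches, and tears down such a segment with shmget()/shmat(); the teardown path is where the "bad pmd" warnings used to appear. It assumes huge pages have already been reserved (for example, vm.nr_hugepages > 0) and a 2 MB huge page size.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Illustrative only: a shared segment backed by huge pages, the kind of
 * mapping exercised by the BZ# 922140 fix.  Assumes a 2 MB huge page size
 * and that huge pages have been reserved beforehand. */
#define SEG_SIZE (2UL * 1024 * 1024)

int main(void)
{
    /* Create a private shared-memory segment backed by huge pages. */
    int id = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | SHM_HUGETLB | 0600);
    if (id < 0) {
        perror("shmget(SHM_HUGETLB)");
        return 1;
    }

    char *p = shmat(id, NULL, 0);
    if (p == (void *)-1) {
        perror("shmat");
        shmctl(id, IPC_RMID, NULL);
        return 1;
    }

    memset(p, 0, SEG_SIZE);        /* touch the segment so page tables exist */
    shmdt(p);                      /* detach: the teardown path fixed above  */
    shmctl(id, IPC_RMID, NULL);    /* mark the segment for removal           */
    return 0;
}
```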
BZ# 923849 Previously, the NFS Lock Manager (NLM) did not resend blocking lock requests after NFSv3 server reboot recovery. As a consequence, when an application was running on a NFSv3 mount and requested a blocking lock, the application received an -ENOLCK error. This patch ensures that NLM always resend blocking lock requests after the grace period has expired. BZ# 924836 A bug in the anon_vma lock in the mprotect() function could cause virtual memory area (vma) corruption. The bug has been fixed so that virtual memory area corruption no longer occurs in this scenario. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 4.119.3. RHBA-2012:1254 - kernel bug fix and enhancement update Updated kernel packages that fix three bugs and add two enhancements are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fixes BZ# 846831 Previously, the TCP socket bound to NFS server contained a stale skb_hints socket buffer. Consequently, kernel could terminate unexpectedly. A patch has been provided to address this issue and skb_hints is now properly cleared from the socket, thus preventing this bug. BZ# 847041 On Intel systems with Pause Loop Exiting (PLE), or AMD systems with Pause Filtering (PF), it was possible for larger multi-CPU KVM guests to experience slowdowns and soft lock-ups. Due to a boundary condition in kvm_vcpu_on_spin, all the VCPUs could try to yield to VCPU0, causing contention on the run queue lock of the physical CPU where the guest's VCPU0 is running. This update eliminates the boundary condition in kvm_vcpu_on_spin. BZ# 847944 Due to a missing return statement, the nfs_attr_use_mounted_on_file() function returned a wrong value. As a consequence, redundant ESTALE errors could potentially be returned. This update adds the proper return statement to nfs_attr_use_mounted_on_file(), thus preventing this bug. Enhancements BZ# 847732 This update adds support for the Proportional Rate Reduction (PRR) algorithms for the TCP protocol. This algorithm determines TCP's sending rate in fast recovery. PRR avoids excessive window reductions and improves accuracy of the amount of data sent during loss recovery. In addition, a number of other enhancements and bug fixes for TCP are part of this update. BZ# 849550 This update affects performance of the O_DSYNC flag on the GFS2 file system when only data (and not metadata such as file size) has been dirtied as a result of the write() system call. Prior to this update, write calls with O_DSYNC were behaving the same way as with O_SYNC at all times. With this update, O_DSYNC write calls only write back data if the inode's metadata is not dirty. This results in a considerable performance improvement for this specific case. Note that the issue does not affect data integrity. The same issue also applies to the pairing of the write() and fdatasync() system calls. All users are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. The system must be rebooted for this update to take effect. 4.119.4. RHBA-2012:1198 - kernel bug fix update Updated kernel packages that fix two bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, the core of any Linux operating system. 
When an NTP server asserts the STA_INS flag (Leap Second Insert), the kernel starts an hrtimer (high-resolution timer) with a countdown clock. This hrtimer expires at end of the current month, midnight UTC, and inserts a second into the kernel timekeeping structures. A scheduled leap second occurred on June 30 2012 midnight UTC. Bug Fixes BZ# 840949 Previously in the kernel, when the leap second hrtimer was started, it was possible that the kernel livelocked on the xtime_lock variable. This update fixes the problem by using a mixture of separate subsystem locks (timekeeping and ntp) and removing the xtime_lock variable, thus avoiding the livelock scenarios that could occur in the kernel. BZ# 847365 After the leap second was inserted, applications calling system calls that used futexes consumed almost 100% of available CPU time. This occurred because the kernel's timekeeping structure update did not properly update these futexes. The futexes repeatedly expired, re-armed, and then expired immediately again. This update fixes the problem by properly updating the futex expiration times by calling the clock_was_set_delayed() function, an interrupt-safe method of the clock_was_set() function. All users are advised to upgrade to these updated packages, which fix these bugs. The system must be rebooted for this update to take effect. 4.119.5. RHBA-2013:0184 - kernel bug fix update Updated kernel packages that fix three bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fixes BZ# 880083 Previously, the IP over Infiniband (IPoIB) driver maintained state information about neighbors on the network by attaching it to the core network's neighbor structure. However, due to a race condition between the freeing of the core network neighbor struct and the freeing of the IPoIB network struct, a use after free condition could happen, resulting in either a kernel oops or 4 or 8 bytes of kernel memory being zeroed when it was not supposed to be. These patches decouple the IPoIB neighbor struct from the core networking stack's neighbor struct so that there is no race between the freeing of one and the freeing of the other. BZ# 884421 Previously, the HP Smart Array, or hpsa, driver used target reset. However, HP Smart Array logical drives do not support target reset. Therefore, if the target reset failed, the logical drive was taken offline with a file system error. The hpsa driver has been updated to use LUN reset instead of target reset, which is supported by these drives. BZ# 891563 Previously, the xdr routines in NFS version 2 and 3 conditionally updated the res->count variable. Read retry attempts after a short NFS read() call could fail to update the res->count variable, resulting in truncated read data being returned. With this update, the res->count variable is updated unconditionally, thus preventing this bug. Users should upgrade to these updated packages, which contain backported patches to fix these bugs. The system must be rebooted for this update to take effect. 4.119.6. RHSA-2011:1530 - Moderate: Red Hat Enterprise Linux 6.2 kernel security, bug fix, and enhancement update Updated kernel packages that fix multiple security issues, address several hundred bugs, and add numerous enhancements are now available as part of the ongoing support and maintenance of Red Hat Enterprise Linux version 6. This is the second regular update. 
The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2011-1020 , Moderate The proc file system could allow a local, unprivileged user to obtain sensitive information or possibly cause integrity issues. CVE-2011-3347 , Moderate Non-member VLAN (virtual LAN) packet handling for interfaces in promiscuous mode and also using the be2net driver could allow an attacker on the local network to cause a denial of service. CVE-2011-3638 , Moderate A flaw was found in the Linux kernel in the way splitting two extents in ext4_ext_convert_to_initialized() worked. A local, unprivileged user with access to mount and unmount ext4 file systems could use this flaw to cause a denial of service. CVE-2011-4110 , Moderate A NULL pointer dereference flaw was found in the way the Linux kernel's key management facility handled user-defined key types. A local, unprivileged user could use the keyctl utility to cause a denial of service. Red Hat would like to thank Kees Cook for reporting CVE-2011-1020 ; Somnath Kotur for reporting CVE-2011-3347 ; and Zheng Liu for reporting CVE-2011-3638 . Bug Fixes BZ# 713682 When a host was in recovery mode and a SCSI scan operation was initiated, the scan operation failed and provided no error output. This bug has been fixed and the SCSI layer now waits for recovery of the host to complete scan operations for devices. BZ# 712139 In a GFS2 file system, when the responsibility for deallocation was passed from one node to another, the receiving node may not have had a fully up-to-date inode state. If the sending node has changed the important parts of the state in the mean time (block allocation/deallocation) then this resulted in triggering an assert during the deallocation on the receiving node. With this update, the inode state is refreshed correctly during deallocation on the receiving node, ensuring that deallocation proceeds normally. BZ# 712131 Issues for which a host had older hypervisor code running on newer hardware, which exposed the new CPU features to the guests, were discovered. This was dangerous because newer guest kernels (such as Red Hat Enterprise Linux 6) may have attempted to use those features or assume certain machine behaviors that it would not be able to process because it was, in fact, a Xen guest. One such place was the intel_idle driver which attempts to use the MWAIT and MONITOR instructions. These instructions are invalid operations for a Xen PV guest. This update provides a patch, which masks the MWAIT instruction to avoid this issue. BZ# 712102 The 128-bit multiply operation in the pvclock.h function was missing an output constraint for EDX which caused a register corruption to appear. As a result, Red Hat Enterprise Linux 3.8 and Red Hat Enterprise Linux 3.9 KVM guests with a Red Hat Enterprise Linux 6.1 KVM host kernel exhibited time inconsistencies. With this update, the underlying source code has been modified to address this issue, and time runs as expected on the aforementioned systems. 
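For readers unfamiliar with the constraint problem described in BZ# 712102, the following illustrative snippet (not the kernel's pvclock code) shows a 32x32-bit multiply with the MULL instruction, which writes its result into the EDX:EAX register pair. Declaring both registers as outputs ("=a" and "=d") tells the compiler that EDX is clobbered; omitting the "=d" output, as in the original bug, lets the compiler assume EDX still holds a live value and leads to silent register corruption. The snippet compiles only on x86/x86_64 with GCC-style inline assembly.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustration only: MULL multiplies EAX by the given operand and stores the
 * 64-bit product in EDX:EAX, so *both* registers must be listed as outputs. */
static inline uint64_t mul32(uint32_t a, uint32_t b)
{
    uint32_t lo, hi;
    asm("mull %3"
        : "=a" (lo), "=d" (hi)   /* outputs: EAX = low half, EDX = high half */
        : "0" (a), "r" (b));     /* inputs: a in EAX, b in any register      */
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)mul32(123456789u, 987654321u));
    return 0;
}
```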
BZ# 712000
Prior to this update, a spurious warning message appeared in kernel log files. The message appeared on bnx2x interfaces in multi-function mode which were not used and had no link, and thus did not indicate any actual problems with connectivity. With this update, the message has been removed and no longer appears in kernel log files.

BZ# 713730
Previously, some enclosure devices with broken firmware reported incorrect values. As a consequence, the kernel sometimes terminated unexpectedly. A patch has been provided to address this issue, and the kernel crashes no longer occur even if an enclosure device reports incorrect or duplicate data.

BZ# 709856
Xen guests cannot make use of all CPU features, and in some cases it is even risky to advertise them. One such feature is CONSTANT_TSC. This feature prevents the TSC (Time Stamp Counter) from being marked as unstable, which allows the sched_clock_stable option to be enabled. Having the sched_clock_stable option enabled is problematic for Xen PV guests because the sched_clock() function has been overridden with the xen_sched_clock() function, which is not synchronized between virtual CPUs. This update provides a patch, which sets all x86_power features to 0 as a preventive measure against other potentially dangerous assumptions the kernel could make based on the features, fixing this issue.

BZ# 623712
Red Hat Enterprise Linux 6.2 backports the scalability improvement for creating many 'cpu' control groups (cgroups) on a system with a large number of CPUs. Creating a large number of cgroups no longer hogs the machine when the control groups feature is enabled. In addition to the scalability improvement, a /proc tunable parameter, sysctl_sched_shares_window, has been added, and the default is set to 10 ms.

BZ# 719304
Older versions of be2net card firmware may not recognize certain commands and return illegal/unsupported errors, causing confusing error messages to appear in the logs. With this update, the driver handles these errors gracefully and does not log them.

BZ# 722461
On IBM System z, if a Linux instance with large amounts of anonymous memory runs into a memory shortage for the first time, all pages on the active or inactive lists are considered referenced. This causes the memory management on IBM System z to do a full check over all page cache pages and start writeback for all of them. As a consequence, the system became temporarily unresponsive when the described situation occurred. With this update, only pages with active mappers are checked and the page scan no longer causes the hangs.

BZ# 722596
This update fixes the inability of the be2net driver to work in a kdump environment. It clears an interrupt bit (in the card) that may be set while the driver is probed by the kdump kernel after a crash.

BZ# 705441
A previously introduced update intended to prevent IOMMU (I/O Memory Management Unit) domain exhaustion introduced two regressions. The first regression was a race where a domain pointer could be freed while a lazy flush algorithm still had a reference to it, eventually causing a kernel panic. The second regression was an erroneous reference removal for identity-mapped and VM IOMMU domains, causing I/O errors. Both of these regressions could only be triggered on Intel-based platforms supporting VT-d, booted with the intel_iommu=on boot option. With this update, the underlying source code of the intel-iommu driver has been modified to resolve both of these problems.
A forced flush is now used to avoid the lazy use after free issue, and extra checks have been added to avoid the erroneous reference removal. BZ# 635596 This update fixes two bugs related to Rx checksum offloading. These bugs caused a data corruption transferred over r8169 NIC when Rx checksum offloading was enabled. BZ# 704401 Prior to this update, kdump failed to create a vmcore file after triggering a crash on POWER7 systems with Dynamic DMA Windows enabled. This update provides a number of fixes that address this issue. BZ# 703935 Previously, auditing system calls used a simple check to determine whether a return value was positive or negative, which also determined the success of the system call. With an exception of few, this worked on most platforms and with most system calls. For example, the 32 bit mmap system call on the AMD64 architecture could return a pointer which appeared to be of value negative even though pointers are normally of unsigned values. This resulted in the success field being incorrect. This patch fixes the success field for all system calls on all architectures. BZ# 703245 When VLANs stacked on top of multiqueue devices passed through these devices, the queue_mapping value was not properly decremented because the VLAN devices called the physical devices via the ndo_select_queue method. This update removes the multiqueue functionality, resolving this issue. BZ# 703055 Prior to this update, Red Hat Enterprise Linux Xen (up to version 5.6) did not hide 1 GB pages and RDTSCP (enumeration features of CPUID), causing guest soft lock ups on AMD hosts when the guest's memory was greater than 8 GB. With this update, a Red Hat Enterprise Linux 6 HVM (Hardware Virtual Machine) guest is able to run on Red Hat Enterprise Linux Xen 5.6 and lower. BZ# 702742 Prior to this update, code was missing from the netif_set_real_num_tx_queues() function which prevented an increment of the real number of TX queues (the real_num_tx_queues value). This update adds the missing code; thus, resolving this issue. BZ# 725711 Previously, the inet6_sk_generic() function was using the obj_size variable to compute the address of its inner structure, causing memory corruption. With this update, the sk_alloc_size() is called every time there is a request for allocation, and memory corruption no longer occurs. BZ# 702057 Multiple GFS2 nodes attempted to unlink, rename, or manipulate files at the same time, causing various forms of file system corruption, panics, and withdraws. This update adds multiple checks for dinode's i_nlink value to assure inode operations such as link, unlink, or rename no longer cause the aforementioned problems. BZ# 701951 A kernel panic in the mpt2sas driver could occur on an IBM system using a drive with SMART (Self-Monitoring, Analysis and Reporting Technology) issues. This was because the driver was sending an SEP request while the kernel was in the interrupt context, causing the driver to enter the sleep state. With this update, a fake event is not executed from the interrupt context, assuring the SEP request is properly issued. BZ# 700538 When using certain SELinux policies, such as the MLS policy, it was not possible to properly mount the cgroupfs file system due to the way security checks were applied to the new cgroupfs inodes during the mount operation. 
With this update, the security checks applied during the mount operation have been changed so that they always succeed, and the cgroupfs file system can now be successfully mounted and used with the MLS SELinux policy. This issue did not affect systems which used the default targeted policy. BZ# 729220 When a SCTP (Stream Control Transmission Protocol) packet contained two COOKIE_ECHO chunks and nothing else, the SCTP state machine disabled output processing for the socket while processing the first COOKIE_ECHO chunk, then lost the association and forgot to re-enable output processing for the socket. As a consequence, any data which needed to be sent to a peer were blocked and the socket appeared to be unresponsive. With this update, a new SCTP command has been added to the kernel code, which sets the association explicitly; the command is used when processing the second COOKIE_ECHO chunk to restore the context for SCTP state machine, thus fixing this bug. BZ# 698268 The hpsa driver has been updated to provide a fix for hpsa driver kdump failures. BZ# 696777 Prior to this update, interrupts were enabled before the dispatch log for the boot CPU was set up, causing kernel panic if a timer interrupt occurred before the log was set up. This update adds a check to the scan_dispatch_log function to ensure the dispatch log has been allocated. BZ# 696754 Prior to this update, the interrupt service routine was performing unnecessary MMIO operation during performance testing on IBM POWER7 machines. With this update, the logic of the routine has been modified so that there are fewer MMIO operations in the performance path of the code. Additionally, as a result of the aforementioned change, an existing condition was exposed where the IPR driver (the controller device driver) could return an unexpected HRRQ (Host Receive Request) interrupt. The original code flagged the interrupt as unexpected and then reset the adapter. After further analysis, it was confirmed that this condition could occasionally occur and the interrupt can be safely ignored. Additional code provided by this update detects this condition, clears the interrupt, and allows the driver to continue without resetting the adapter. BZ# 732706 The ACPI (Advanced Control and Power Interface) core places all events to the kacpi_notify queue including PCI hotplug events. When the acpiphp driver was loaded and a PCI card with a PCI-to-PCI bridge was removed from the system, the code path attempted to empty the kacpi_notify queue which causes a deadlock, and the kacpi_notify thread became unresponsive. With this update, the call sequence has been fixed, and the bridge is now cleaned-up properly in the described scenario. BZ# 669363 Prior to this update, the /proc/diskstats file showed erroneous values. This occurred when the kernel merged two I/O operations for adjacent sectors which were located on different disk partitions. Two merge requests were submitted for the adjacent sectors, the first request for the second partition and the second request for the first partition, which was then merged to the first request. The first submission of the merge request incremented the in_flight value for the second partition. However, at the completion of the merge request, the in_flight value of a different partition (the first one) was decremented. This resulted in the erroneous values displayed in the /proc/diskstats file. 
With this update, the merging of two I/O operations which are located on different disk partitions has been fixed and works as expected. BZ# 670765 Due to an uninitialized variable (specifically, the isr_ack variable), a virtual guest could become unresponsive when migrated while being rebooted. With this update, the said variable is properly initialized, and virtual guests no longer hang in the aforementioned scenario. BZ# 695231 Prior to this update, the be2net driver was using the BE3 chipset in legacy mode. This update enables this chipset to work in a native mode, making it possible to use all 4 ports on a 4-port integrated NIC. BZ# 694747 A Windows Server 2008 32-bit guest installation failed on a Red Hat Enterprise Linux 6.1 Snap2 KVM host when allocating more than one virtual CPU (vcpus > 1) during the installation. As soon the installation started after booting from ISO, a blue screen with the following error occurred: This was because a valid microcode update signature was not reported to the guest. This update fixes this issue by reporting a non-zero microcode update signature to the guest. BZ# 679526 Disk read operations on a memory constrained system could cause allocations to stall. As a result, the system performance would drop considerably. With this update, latencies seen in page reclaim operations have been reduced and their efficiency improved; thus, fixing this issue. BZ# 736667 A workaround to the megaraid_sas driver was provided to address an issue but as a side effect of the workaround, megaraid_sas stopped to report certain enclosures, CD-ROM drives, and other devices. The underlying problem for the issue has been fixed as reported in BZ#741166. With this update, the original workaround has been reverted, and megaraid_sas now reports many different devices as before. BZ# 694210 This update fixes a regression in which a client would use an UNCHECKED NFS CREATE call when an open system call was attempted with the O_EXCL|O_CREAT flag combination. An EXCLUSIVE NFS CREATE call should have been used instead to ensure that O_EXCL semantics were preserved. As a result, an application could be led to believe that it had created the file when it was in fact created by another application. BZ# 692167 A race between the FSFREEZE ioctl() command to freeze an ext4 file system and mmap I/O operations would result in a deadlock if these two operations ran simultaneously. This update provides a number of patches to address this issue, and a deadlock no longer occurs in the previously-described scenario. BZ# 712653 When a CPU is about to modify data protected by the RCU (Read Copy Update) mechanism, it has to wait for other CPUs in the system to pass a quiescent state. Previously, the guest mode was not considered a quiescent state. As a consequence, if a CPU was in the guest mode for a long time, another CPU had to wait a long time in order to modify RCU-protected data. With this update, the rcu_virt_note_context_switch() function, which marks the guest mode as a quiescent state, has been added to the kernel, thus resolving this issue. BZ# 683658 The patch that fixed BZ#556572 introduced a bug where the page lock was being released too soon, allowing the do_wp_page function to reuse the wrprotected page before PageKsm would be set in page->mapping. With this update, a new version of the original fix was introduced, thus fixing this issue. BZ# 738110 Due to the partial support of IPv6 multicast snooping, IPv6 multicast packets may have been dropped. 
This update fixes IPv6 multicast snooping so that packets are no longer dropped. BZ# 691310 While executing a multi-threaded process by multiple CPUs, page-directory-pointer-table entry (PDPTE) registers were not fully flushed from the CPU cache when a Page Global Directory (PGD) entry was changed in x86 Physical Address Extension (PAE) mode. As a consequence, the process failed to respond for a long time before it successfully finished. With this update, the kernel has been modified to flush the Translation Lookaside Buffer (TLB) for each CPU using a page table that has changed. Multi-threaded processes now finish without hanging. BZ# 738379 When a kernel NFS server was being stopped, kernel sometimes terminated unexpectedly. A bug has been fixed in the wait_for_completion_interruptible_timeout() function and the crashes no longer occur in the described scenario. BZ# 690745 Recent Red Hat Enterprise Linux 6 releases use a new naming scheme for network interfaces on some machines. As a result, the installer may use different names during an upgrade in certain scenarios (typically em1 is used instead of eth0 on new Dell machines). However, the previously used network interface names are preserved on the system and the upgraded system will still use the previously used interfaces. This is not the case for Yum upgrades. BZ# 740465 A scenario for this bug involves two hosts, configured to use IPv4 network, and two guests, configured to use IPv6 network. When a guest on host A attempted to send a large UDP datagram to host B, host A terminated unexpectedly. With this update, the ipv6_select_ident() function has been fixed to accept the in6_addr parameter and to use the destination address in IPv6 header when no route is attached, and the crashes no longer occur in the described scenario. BZ# 693894 Migration of a Windows XP virtual guest during the early stage of a boot caused the virtual guest OS to fail to boot correctly. With this update, the underlying source code has been modified to address this issue, and the virtual guest OS no longer fails to boot. BZ# 694358 This update adds a missing patch to the ixgbe driver to use the kernel's generic routine to set and obtain the DCB (Data Center Bridging) priority. Without this fix, applications could not properly query the DCB priority. BZ# 679262 In Red Hat Enterprise Linux 6.2, due to security concerns, addresses in /proc/kallsyms and /proc/modules show all zeros when accessed by a non-root user. BZ# 695859 Red Hat Enterprise Linux 6.0 and 6.1 defaulted to running UEFI systems in a physical addressing mode. Red Hat Enterprise Linux 6.2 defaults to running UEFI systems in a virtual addressing mode. The behavior may be obtained by passing the physefi kernel parameter. BZ# 695966 After receiving an ABTS response, the FCoE (Fibre Channel over Ethernet) DDP error status was cleared. As a result, the FCoE DDP context invalidation was incorrectly bypassed and caused memory corruption. With this update, the underlying source code has been modified to address this issue, and memory corruption no longer occurs. BZ# 696511 Suspending a system to RAM and consequently resuming it caused USB3.0 ports to not work properly. This was because a USB3.0 device configured for MSIX would, during the resume operation, incorrectly read its interrupt state. This would lead it to fall back to a legacy mode and appear unresponsive. With this update, the interrupt state is cached, allowing the driver to properly resume its state. 
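The behavior described in BZ# 679262 above can be observed directly from user space. The short sketch below simply prints the first few lines of /proc/kallsyms; run as an unprivileged user the address column is expected to show all zeros, while as root the real symbol addresses appear. It is illustrative only and assumes nothing beyond the presence of /proc/kallsyms.

```c
#include <stdio.h>

/* Print the first few /proc/kallsyms entries to observe the address
 * masking described in BZ# 679262. */
int main(void)
{
    FILE *f = fopen("/proc/kallsyms", "r");
    char line[256];
    int n = 0;

    if (!f) {
        perror("fopen(/proc/kallsyms)");
        return 1;
    }
    while (n < 5 && fgets(line, sizeof(line), f)) {
        fputs(line, stdout);
        n++;
    }
    fclose(f);
    return 0;
}
```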
BZ# 662666 Deleting the lost+found directory on a file system with inodes of size greater than 128 bytes and reusing inode 11 for a different file caused the extended attributes for inode 11 (which were set before a umount operation) to not be saved after a file system remount. As a result, the extended attributes were lost after the remount. With this update, inodes store their extended attributes under all circumstances. BZ# 698023 Prior to this update, in the __cache_alloc() function, the ac variable could be changed after cache_alloc_refill() and the following kmemleak_erase() function could receive an incorrect pointer, causing kernel panic. With this update, the ac variable is updated after the cache_alloc_refill() unconditionally. BZ# 698625 This update includes two fixes for the bna driver, specifically: A memory leak was caused by an unintentional assignment of the NULL value to the RX path destroy callback function pointer after a correct initialization. During a kernel crash, the bna driver control path state machine and firmware did not receive a notification of the crash, and, as a result, were not shut down cleanly. BZ# 700165 When an event caused the ibmvscsi driver to reset its CRQ, re-registering the CRQ returned H_CLOSED, indicating that the Virtual I/O Server was not ready to receive commands. As a consequence, the ibmvscsi driver offlined the adapter and did not recover. With this update, the interrupt is re-enabled after the reset so that when the Virtual I/O server is ready and sends a CRQ init, it is able to receive it and resume initialization of the VSCSI adapter. BZ# 700299 This update standardizes the printed format of UUIDs (Universally Unique Identifier)/GUIDs (Globally Unique Identifier) by using an additional extension to the %p format specifier (which is used to show the memory address value of a pointer). BZ# 702036 Prior to this update, the ehea driver caused a kernel oops during a memory hotplug if the ports were not up. With this update, the waitqueues are initialized during the port probe operation, instead of during the port open operation. BZ# 702263 While running gfs2_grow, the file system became unresponsive. This was due to the log not getting flushed when a node dropped its rindex glock so that another node could grow the file system. If the log did not get flushed, GFS2 could corrupt the sd_log_le_rg list, ultimately causing a hang. With this update, a log flush is forced when the rindex glock is invalidated; gfs2_grow completes as expected and the file system remains accessible. BZ# 703251 The Brocade BFA FC/FCoE driver was previously selectively marked as a Technology Preview based on the type of the adapter. With this update, the Brocade BFA FC/FCoE driver is always marked as a Technology Preview. BZ# 703265 The Brocade BFA FC SCSI driver (bfa driver) has been upgraded to version 2.3.2.4. Additionally, this update provides the following two fixes: A firmware download memory leak was caused by the release_firmware() function not being called after the request_firmware() function. Similarly, the firmware download interface has been fixed and now works as expected. During a kernel crash, the bfa I/O control state machine and firmware did not receive a notification of the crash, and, as a result, were not shut down cleanly. 
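BZ# 700299 above refers to the %pU extension of the kernel's %p format specifier for printing UUIDs/GUIDs. The following minimal, hypothetical kernel module (not part of the update itself) demonstrates the specifier; the example_uuid value is made up purely for illustration.

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

/* Illustrative module: printing a 16-byte UUID with the %pU extension
 * described in BZ# 700299.  The UUID bytes here are arbitrary. */
static const u8 example_uuid[16] = {
    0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
    0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
};

static int __init uuid_demo_init(void)
{
    printk(KERN_INFO "big-endian UUID:    %pUb\n", example_uuid);
    printk(KERN_INFO "little-endian UUID: %pUl\n", example_uuid);
    return 0;
}

static void __exit uuid_demo_exit(void)
{
}

module_init(uuid_demo_init);
module_exit(uuid_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("UUID printk format demo");
```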
BZ# 704231 A previously released patch for BZ#625487 introduced a kABI (Kernel Application Binary Interface) workaround that extended struct sock (the network layer representation of sockets) by putting the extension structure in the memory right after the original structure. As a result, the prot->obj_size pointer had to be adjusted in the proto_register function. Prior to this update, the adjustment was done only if the alloc_slab parameter of the proto_register function was not 0. When the alloc_slab parameter was 0, drivers performed allocations themselves using sk_alloc and as the allocated memory was lower than needed, a memory corruption could occur. With this update, the underlying source code has been modified to address this issue, and a memory corruption no longer occurs. BZ# 705082 A scalability issue with KVM/QEMU was discovered in the idr_lock spinlock in the posix-timers code, resulting in excessive CPU resource usage. With this update, the underlying source code has been modified to address this issue, and the aforementioned spinlock no longer uses excessive amounts of CPU resources. BZ# 723650 When a NFS server returned more than two GETATTR bitmap words in response to the FATTR4_ACL attribute request, decoding operations of the nfs4_getfacl() function failed. A patch has been provided to address this issue and the ACLs are now returned in the described scenario. BZ# 707268 After hot plugging one of the disks of a non-boot 2-disk RAID1 pair, the md driver would enter an infinite resync loop thinking there was a spare disk available, when, in fact, there was none. This update adds an additional check to detect the previously mentioned situation; thus, fixing this issue. BZ# 707757 The default for CFQ's group_isolation variable has been changed from 0 to 1 (/sys/block/<device>/queue/iosched/group_isolation). After various testing and numerous user reports, it was found that having default 1 is more useful. When set to 0, all random I/O queues become part of the root cgroup and not the actual cgroup which the application is part of. Consequently, this leads to no service differentiation for applications. BZ# 691945 In error recovery, most SCSI error recovery stages send a TUR (Test Unit Ready) command for every bad command when a driver error handler reports success. When several bad commands pointed to a same device, the device was probed multiple times. When the device was in a state where the device did not respond to commands even after a recovery function returned success, the error handler had to wait for the commands to time out. This significantly impeded the recovery process. With this update, SCSI mid-layer error routines to send test commands have been fixed to respond once per device instead of once per bad command, thus reducing error recovery time considerably. BZ# 696396 Prior to this update, loading the FS-Cache kernel module would cause the kernel to be tainted as a Technology Preview via the mark_tech_preview() function, which would cause kernel lock debugging to be disabled by the add_taint() function. However, the NFS and CIFS modules depend on the FS-Cache module so using either NFS or CIFS would cause the FS-Cache module to be loaded and the kernel tainted. With this update, FS-Cache only taints the kernel when a cache is brought online (for instance by starting the cachefilesd service) and, additionally, the add_taint() function has been modified so that it does not disable lock debugging for informational-only taints. 
BZ# 703728 This update removes temporary and unneeded files that were previously included with the kernel source code. BZ# 632802 Previously removed flushing of MMU updates in the kmap_atomic() and kunmap_atomic() functions resulted in a dereference bug when processing a fork() under a heavy load. This update fixes page table entries in the kmap_atomic() and kunmap_atomic() functions to be synchronous, regardless of the lazy_mmu mode, thus fixing this issue. BZ# 746570 Previously fixed ABI issues in Red Hat Enterprise Linux 6.2 resulted in broken drivers that were built against the Red Hat Enterprise Linux 6.1 sources. This update adds padding to the net_device private structure so that the overruns resulting from an excessively-long pointer computed in the netdev_priv structure do not exceed the bounds of allocated memory. BZ# 737753 A previously introduced patch increased the value of the cpuid field from 8 to 16 bits. As a result, in some cases, modules built against the Red Hat Enterprise Linux 6.0 kernel source panicked when loaded into the new Red Hat Enterprise Linux 6.2 kernel. This update provides a patch which fixes this guaranteed backwards compatibility. BZ# 745253 KABI issues with additional fields in the "uv_blade_info" structure were discovered that prevented existing SGI modules from loading against the Red Hat Enterprise Linux 6.2 kernel. This update fixes the code in the "uv_blade_info" structure, and SGI modules load against the Red Hat Enterprise Linux 6.2 kernel as expected. BZ# 748503 Incorrect duplicate MAC addresses were being used on a rack network daughter card that contained a quad-port Intel I350 Gigabit Ethernet Controller. With this update, the underlying source code has been modified to address this issue, and correct MAC addresses are now used under all circumstances. BZ# 728676 Prior to this update, on certain HP systems, the hpsa and cciss drivers could become unresponsive and cause the system to crash when booting due to an attempt to read from a write-only register. This update fixes this issue, and the aforementioned crashes no longer occur. BZ# 693930 The cxgb4 driver never waited for RDMA_WR/FINI completions because the condition variable used to determine whether the completion happened was never reset, and this condition variable was reused for both connection setup and teardown. This caused various driver crashes under heavy loads because resources were released too early. With this update, atomic bits are used to correctly reset the condition immediately after the completion is detected. BZ# 710497 If a Virtual I/O server failed in a dual virtual I/O server multipath configuration, not all remote ports were deleted, causing path failover to not work properly. With this update, all remote ports are deleted so that path failover works as expected. For a single path configuration, the remote ports will enter the devloss state. BZ# 713868 When using the "crashkernel=auto" parameter and the "crashk_res.start" variable was set to 0, the existing logic automatically set the value of the "crashk_res.start" variable to 32M. However, to keep enough space in the RMO region for the first stage kernel on 64-bit PowerPC, the "crashk_res.start" should have been set to KDUMP_KERNELBASE (64M). This update fixes this issue and properly assigns the correct value to the "crashk_res.start" variable. 
BZ# 743959 Due to a delay in settling of the usb-storage driver, the kernel failed to report all the disk drive devices in time to Anaconda, when booted in Unified Extensible Firmware Interface (UEFI) mode. Consequently, Anaconda presumed that no driver disks were available and loaded the standard drivers. With this update, both Anaconda and the driver use a one second delay, all devices are enumerated and inspected for driver disks properly. BZ# 690129 Prior to this update, the remap_file_pages() call was disabled for mappings without the VM_CAN_NONLINEAR flag set. Shared mappings of temporary file storage facilities (tmpfs) had this flag set but the flag was not set for the shared mappings of the /dev/zero device or shared anonymous mappings. With this update, the code has been modified and the VM_CAN_NONLINEAR flag is set also on the shared mappings of the /dev/zero device and shared anonymous mappings. BZ# 694309 The NFS client iterates through individual elements of a vector and issues a write request for each element to the server when the writev() function is called on a file opened with the O_DIRECT flag. Consequently, the server commits each individual write to the disk before replying to the client and the request transfer for the NFS client to the NFS server causes performance problems. With this update, the larger I/Os from the client are submitted only if all buffers are page-aligned, each individual vector element is aligned and has multiple pages, and the total I/O size is less than wsize (write block size). BZ# 699042 Improper shutdown in the e1000e driver caused a client with Intel 82578DM Gigabit Ethernet PHY to ignore the Wake-on-LAN signal and attempt to boot the client failed. This update applies the upstream Intel patch which fixes the problem. BZ# 703357 The "ifconfig up" command allocates memory for Direct Memory Access (DMA) operations. The memory is released when the "ifconfig down" command is issued. Previously, if another "ifconfig up" command was issued after an ifconfig up/down session, it re-enabled the DMA operations before sending the new DMA memory address to the NIC and the NIC could access the DMA address allocated during the ifconfig up/down session. However, the DMA address was already freed and could be used by another process. With this update, the underlying code has been modified and the problem no longer occurs. BZ# 729737 The in-process I/O operations of the Chelsio iWARP (cxgb3) driver could attempt to access a control data structure, which was previously freed after a hardware error that disabled the offload functionality occurred. This caused the system to terminate unexpectedly. With this update, the driver delays the freeing of the data structure and the problem no longer occurs. BZ# 734509 Previously, the capabilities flag of the WHEA_OSC call was set to 0. This could cause certain machines to disable APEI (ACPI Platform Error Interface). The flag is now set to 1, which enables APEI and fixes the problem. BZ# 748441 Previously, the origin device was being read when overwriting a complete chunk in the snapshot. This led to a significant memory leak when using the dm-snapshot module. With this update, reading of the origin device is skipped, and the memory leak no longer occurs. BZ# 750208 When the user attempted to list the mounted GFS2 file systems, a kernel panic occurred. This happened if the file in the location which the user tried to list was at the same time being manipulated by using the "fallocate" command. 
With this update, page cache is no longer used; the block is zeroed out at allocation time instead. Now, a kernel panic no longer occurs.

BZ# 749018
The queuecommand error-handling function could cause memory leaks or prevent the TUR command from finishing for SCSI device drivers that enabled the support for lockless dispatching (lockless=1). This happened because the device driver did not call the scsi_cmd_get_serial() function and the serial_number property of the command remained zero. Consequently, the SCSI command could not be finished or aborted because the error-handling function always returned success for "serial_number == 0". The check for the serial number has been removed and the SCSI command can now be finished or aborted.

BZ# 750583
A patch for the Ironlake graphics controller and memory controller hub (GMCH) with a workaround for Virtualization Technology for Directed I/O (VT-d) introduced recursive calls to the unmap() function. With this update, a flag which prevents the recursion was added to the call chain, allowing the called routines to avoid the recursion.

Enhancements

Note
For more information on the most important of the RHEL 6.2 kernel enhancements, refer to the Red Hat Enterprise Linux 6.2 Release Notes.

BZ# 707287
This update introduces a kernel module option that allows the disabling of the Flow Director.

BZ# 706167
This update adds XTS (XEX-based Tweaked CodeBook) AES256 self-tests to meet the FIPS-140 requirements.

BZ# 635968
This update introduces parallel port printer support for Red Hat Enterprise Linux 6.

BZ# 699865
This update reduces the overhead of probes provided by kprobe (a dynamic instrumentation system), and enhances the performance of SystemTap.

BZ# 696695
With this update, the JSM driver has been updated to support enabling the Bell2 (with PLX chip) 2-port adapter on POWER7 systems. Additionally, EEH support has been added to the JSM driver.

BZ# 669739
The memory limit for x86_64 domU PV guests has been increased to 128 GB: CONFIG_XEN_MAX_DOMAIN_MEMORY=128.

BZ# 662208
In Red Hat Enterprise Linux 6.2, the taskstat utility (which prints ASET tasks status) in the kernel has been enhanced by providing microsecond CPU time granularity to the top utility.

BZ# 708365
Red Hat Enterprise Linux 6.2 introduced the multi-message send syscall, which is the send version of the existing recvmmsg syscall in Red Hat Enterprise Linux 6. The sendmmsg socket API is: int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen, unsigned int flags);

BZ# 647700
Red Hat Enterprise Linux 6.2's EDAC driver support for the latest Intel chipset is available as a Technology Preview.

BZ# 599054
In Red Hat Enterprise Linux 6.2, the ipset feature is added to the kernel to store multiple IP addresses or port numbers and match against the collection with iptables.

Users should upgrade to these updated packages, which contain backported patches to correct these issues, fix these bugs, and add these enhancements. The system must be rebooted for this update to take effect.

4.119.7. RHSA-2011:1849 - Important: kernel security and bug fix update

Updated kernel packages that fix one security issue and various bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fix CVE-2011-4127 , Important Using the SG_IO IOCTL to issue SCSI requests to partitions or LVM volumes resulted in the requests being passed to the underlying block device. If a privileged user only had access to a single partition or LVM volume, they could use this flaw to bypass those restrictions and gain read and write access (and be able to issue other SCSI commands) to the entire block device. In KVM (Kernel-based Virtual Machine) environments using raw format virtio disks backed by a partition or LVM volume, a privileged guest user could bypass intended restrictions and issue read and write requests (and other SCSI commands) on the host, and possibly access the data of other guests that reside on the same underlying block device. Partition-based and LVM-based storage pools are not used by default. Refer to Red Hat Bugzilla bug 752375 for further details and a mitigation script for users who cannot apply this update immediately. Bug Fixes BZ# 750459 Previously, idle load balancer kick requests from other CPUs could be serviced without first receiving an inter-processor interrupt (IPI). This could have led to a deadlock. BZ# 751403 This update fixes a performance regression that may have caused processes (including KVM guests) to hang for a number of seconds. BZ# 755545 When md_raid1_unplug_device() was called while holding a spinlock, under certain device failure conditions, it was possible for the lock to be requested again, deeper in the call chain, causing a deadlock. Now, md_raid1_unplug_device() is no longer called while holding a spinlock. BZ# 756426 In hpet_next_event(), an interrupt could have occurred between the read and write of the HPET (High Performance Event Timer) and the value of HPET_COUNTER was then beyond that being written to the comparator (HPET_Tn_CMP). Consequently, the timers were overdue for up to several minutes. Now, a comparison is performed between the value of the counter and the comparator in the HPET code. If the counter is beyond the comparator, the "-ETIME" error code is returned. BZ# 756427 Index allocation in the virtio-blk module was based on a monotonically increasing variable "index". Consequently, released indexes were not reused and after a period of time, no new were available. Now, virtio-blk uses the ida API to allocate indexes. BZ# 757671 A bug related to Context Caching existed in the Intel IOMMU support module. On some newer Intel systems, the Context Cache mode has changed from hardware versions, potentially exposing a Context coherency race. The bug was exposed when performing a series of hot plug and unplug operations of a Virtual Function network device which was immediately configured into the network stack, i.e., successfully performed dynamic host configuration protocol (DHCP). When the coherency race occurred, the assigned device would not work properly in the guest virtual machine. With this update, the Context coherency is corrected and the race and potentially resulting device assignment failure no longer occurs. BZ# 758028 The align_va_addr kernel parameter was ignored if secondary CPUs were initialized. This happened because the parameter settings were overridden during the initialization of secondary CPUs. Also, the align_va_addr parameter documentation contained incorrect parameter arguments. With this update, the underlying code has been modified to prevent the overriding and the documentation has been updated. This update also removes the unused code introduced by the patch for BZ# 739456 . 
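CVE-2011-4127 at the top of this section concerns SCSI requests submitted through the SG_IO ioctl. For context, the sketch below shows what such a request looks like from user space: it issues a TEST UNIT READY command to a block device. It is illustrative only; it requires appropriate privileges (root or CAP_SYS_RAWIO), the device path is an assumption, and on kernels containing this fix the call may be rejected with EPERM or a similar error when issued against a partition or LVM volume rather than a whole disk.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <scsi/sg.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sda";   /* assumed device */
    unsigned char cdb[6] = { 0x00, 0, 0, 0, 0, 0 };      /* TEST UNIT READY */
    unsigned char sense[32];
    struct sg_io_hdr io;

    int fd = open(dev, O_RDWR | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&io, 0, sizeof(io));
    io.interface_id = 'S';               /* required magic for SG_IO        */
    io.cmd_len = sizeof(cdb);
    io.cmdp = cdb;
    io.dxfer_direction = SG_DXFER_NONE;  /* no data transfer for this CDB   */
    io.sbp = sense;
    io.mx_sb_len = sizeof(sense);
    io.timeout = 5000;                   /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0)
        perror("ioctl(SG_IO)");          /* may fail depending on privileges
                                            and whether dev is a partition  */
    else
        printf("SCSI status: 0x%x\n", io.status);

    close(fd);
    return 0;
}
```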
BZ# 758513
Dell systems based on a future Intel processor with graphics acceleration required selecting the "Install system with basic video driver" installation option. This update removes this requirement.

Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.

4.119.8. RHSA-2012:0052 - Important: kernel security and bug fix update

Updated kernel packages that fix one security issue and various bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fix

CVE-2012-0056, Important
It was found that permissions were not checked properly in the Linux kernel when handling the /proc/[pid]/mem writing functionality. A local, unprivileged user could use this flaw to escalate their privileges. Refer to Red Hat Knowledgebase article 69124 for further information. Red Hat would like to thank Juri Aedla for reporting this issue.

Bug Fixes

BZ# 768288
The RHSA-2011:1849 kernel update introduced a bug in the Linux kernel scheduler, causing a "WARNING: at kernel/sched.c:5915 thread_return" message and a call trace to be logged. This message was harmless, and was not due to any system malfunctions or adverse behavior. With this update, the WARN_ON_ONCE() call in the scheduler that caused this harmless message has been removed.

BZ# 769595
The RHSA-2011:1530 kernel update introduced a regression in the way the Linux kernel maps ELF headers for kernel modules into kernel memory. If a third-party kernel module is compiled on a Red Hat Enterprise Linux system with a kernel prior to RHSA-2011:1530, then loading that module on a system with the RHSA-2011:1530 kernel would result in corruption of one byte in the memory reserved for the module. In some cases, this could prevent the module from functioning correctly.

BZ# 755867
On some SMP systems the TSC may erroneously be marked as unstable during early system boot or while the system is under heavy load. A "Clocksource tsc unstable" message was logged when this occurred. As a result, the system would switch to the slower-access but higher-precision HPET clock. The "tsc=reliable" kernel parameter is supposed to avoid this problem by indicating that the system has a known good clock; however, the parameter only affected run-time checks. A fix has been put in to avoid the boot-time checks as well, so that the TSC remains the clock source for the duration of system runtime.

Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.

4.119.9. RHSA-2012:0350 - Moderate: kernel security and bug fix update

Updated kernel packages that fix several security issues and bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below.
Security Fixes CVE-2011-4077 , Moderate A buffer overflow flaw was found in the way the Linux kernel's XFS file system implementation handled links with overly long path names. A local, unprivileged user could use this flaw to cause a denial of service or escalate their privileges by mounting a specially-crafted disk. CVE-2011-4081 , Moderate Flaws in ghash_update() and ghash_final() could allow a local, unprivileged user to cause a denial of service. CVE-2011-4132 , Moderate A flaw was found in the Linux kernel's Journaling Block Device (JBD). A local, unprivileged user could use this flaw to crash the system by mounting a specially-crafted ext3 or ext4 disk. CVE-2011-4347 , Moderate It was found that the kvm_vm_ioctl_assign_device() function in the KVM (Kernel-based Virtual Machine) subsystem of a Linux kernel did not check if the user requesting device assignment was privileged or not. A local, unprivileged user on the host could assign unused PCI devices, or even devices that were in use and whose resources were not properly claimed by the respective drivers, which could result in the host crashing. CVE-2011-4594 , Moderate Two flaws were found in the way the Linux kernel's __sys_sendmsg() function, when invoked via the sendmmsg() system call, accessed user-space memory. A local, unprivileged user could use these flaws to cause a denial of service. CVE-2011-4611 , Moderate The RHSA-2011:1530 kernel update introduced an integer overflow flaw in the Linux kernel. On PowerPC systems, a local, unprivileged user could use this flaw to cause a denial of service. CVE-2011-4622 , Moderate A flaw was found in the way the KVM subsystem of a Linux kernel handled PIT (Programmable Interval Timer) IRQs (interrupt requests) when there was no virtual interrupt controller set up. A local, unprivileged user on the host could force this situation to occur, resulting in the host crashing. CVE-2012-0038 , Moderate A flaw was found in the way the Linux kernel's XFS file system implementation handled on-disk Access Control Lists (ACLs). A local, unprivileged user could use this flaw to cause a denial of service or escalate their privileges by mounting a specially-crafted disk. CVE-2012-0045 , Moderate A flaw was found in the way the Linux kernel's KVM hypervisor implementation emulated the syscall instruction for 32-bit guests. An unprivileged guest user could trigger this flaw to crash the guest. CVE-2012-0207 , Moderate A divide-by-zero flaw was found in the Linux kernel's igmp_heard_query() function. An attacker able to send certain IGMP (Internet Group Management Protocol) packets to a target system could use this flaw to cause a denial of service. Red Hat would like to thank Nick Bowler for reporting CVE-2011-4081; Sasha Levin for reporting CVE-2011-4347; Tetsuo Handa for reporting CVE-2011-4594; Maynard Johnson for reporting CVE-2011-4611; Wang Xi for reporting CVE-2012-0038; Stephan Barwolf for reporting CVE-2012-0045; and Simon McVittie for reporting CVE-2012-0207. Upstream acknowledges Mathieu Desnoyers as the original reporter of CVE-2011-4594. Bug Fixes BZ# 789058 Windows clients never send write requests larger than 64 KB but the default size for write requests in Common Internet File System (CIFS) was set to a much larger value. Consequently, write requests larger than 64 KB caused various problems on certain third-party servers. This update lowers the default size for write requests to prevent this bug. The user can override this value to a larger one to get better performance. 
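As background for CVE-2011-4594 above, the sendmmsg() system call (its prototype is reproduced in the accompanying command listing) hands a batch of messages to __sys_sendmsg() in a single call. The hedged C sketch below shows typical usage; the destination address, port, and payloads are placeholders, and older glibc versions may need to invoke the call through syscall(2) instead of the wrapper.

/* Illustrative sendmmsg() usage; address, port, and payloads are placeholders. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    static char a[] = "one", b[] = "two";
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    struct iovec iov[2];
    struct mmsghdr msgs[2];

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                        /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);    /* documentation address */

    iov[0].iov_base = a; iov[0].iov_len = sizeof(a) - 1;
    iov[1].iov_base = b; iov[1].iov_len = sizeof(b) - 1;

    memset(msgs, 0, sizeof(msgs));
    msgs[0].msg_hdr.msg_name = &dst;
    msgs[0].msg_hdr.msg_namelen = sizeof(dst);
    msgs[0].msg_hdr.msg_iov = &iov[0];
    msgs[0].msg_hdr.msg_iovlen = 1;
    msgs[1].msg_hdr = msgs[0].msg_hdr;
    msgs[1].msg_hdr.msg_iov = &iov[1];

    /* One system call sends both datagrams. */
    int sent = sendmmsg(fd, msgs, 2, 0);
    printf("sent %d messages\n", sent);
    return 0;
}

On success the kernel fills in msg_len for each entry, which is what distinguishes struct mmsghdr from a plain struct msghdr.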
BZ# 788003 In certain circumstances, the qla2xxx driver was unable to discover fibre channel (FC) tape devices because the ADISC ELS request failed. This update adds the new module parameter, ql2xasynclogin, to address this issue. When this parameter is set to "0", FC tape devices are discovered properly. BZ# 787580 Socket callbacks use the svc_xprt_enqueue() function to add sockets to the pool->sp_sockets list. In normal operation, a server thread will later take the socket off that list. Previously, on the nfsd daemon shutdown, still-running svc_xprt_enqueue() could re-add an socket to the sp_sockets list just before it was deleted. Consequently, system could terminate unexpectedly by memory corruption in the sunrpc module. With this update, the XPT_BUSY flag is put on every socket and svc_xprt_enqueue() now checks this flag, thus preventing this bug. BZ# 787162 When trying to send a kdump file to a remote system via the tg3 driver, the tg3 NIC (network interface controller) could not establish the connection and the file could not be sent. The kdump kernel leaves the MSI-X interrupts enabled as set by the crashed kernel, however, the kdump kernel only enables one CPU and this could cause the interrupt delivery to the tg3 driver to fail. With this update, tg3 enables only a single MSI-X interrupt in the kdump kernel to match the overall environment, thus preventing this bug. BZ# 786022 Previously, the cfq_cic_link() function had a race condition. When some processes, which shared ioc issue I/O to the same block device simultaneously, cfq_cic_link() sometimes returned the -EEXIST error code. Consequently, one of the processes started to wait indefinitely. A patch has been provided to address this issue and the cfq_cic_lookup() call is now retried in the described scenario, thus fixing this bug. BZ# 783226 When transmitting a fragmented socket buffer (SKB), the qlge driver fills a descriptor with fragment addresses, after DMA-mapping them. On systems with pages larger than 8 KB and less than eight fragments per SKB, a macro defined the size of the OAL (Outbound Address List) list as 0. For SKBs with more than eight fragments, this would start overwriting the list of addresses already mapped and would make the driver fail to properly unmap the right addresses on architectures with pages larger than 8 KB. With this update, the size of external list for TX address descriptors have been fixed and qlge no longer fails in the described scenario. BZ# 781971 The time-out period in the qla2x00_fw_ready() function was hard-coded to 20 seconds. This period was too short for new QLogic host bus adapters (HBAs) for Fibre Channel over Ethernet (FCoE). Consequently, some logical unit numbers (LUNs) were missing after a reboot. With this update, the time-out period has been set to 60 seconds so that the modprobe utility is able to recheck the driver module, thus fixing this bug. BZ# 772687 Previously, the remove_from_page_cache() function was not exported. Consequently, the module for the Lustre file system did not work correctly. With this update, remove_from_page_cache() is properly exported, thus fixing this bug. BZ# 761536 Due to a regression, the updated vmxnet3 driver used the ndo_set_features() method instead of various methods of the ethtool utility. Consequently, it was not possible to make changes to vmxnet3-based network adapters in Red Hat Enterprise Linux 6.2. This update restores the ability of the driver to properly set features, such as csum or TSO (TCP Segmentation Offload), via ethtool. 
BZ# 771981 Due to regression, an attempt to open a directory that did not have a cached dentry failed and the EISDIR error code was returned. The same operation succeeded if a cached dentry existed. This update modifies the nfs_atomic_lookup() function to allow fallbacks to normal look-up in the described scenario. BZ# 768916 On a system with an idle network interface card (NIC) controlled by the e1000e driver, when the card transmitted up to four descriptors, which delayed the write-back and nothing else, the run of the watchdog driver about two seconds later forced a check for a transmit hang in the hardware, which found the old entry in the TX ring. Consequently, a false "Detected Hardware Unit Hang" message was issued to the log. With this update, when the hang is detected, the descriptor is flushed and the hang check is run again, which fixes this bug. BZ# 769208 The CFQ (Completely Fair Queuing) scheduler does idling on sequential processes. With changes to the IOeventFD feature, traffic pattern at CFQ changed and CFQ considered everything a thread was doing sequential I/O operations. Consequently, CFQ did not allow preemption across threads in Qemu. This update increases the preemption threshold and the idling is now limited in the described scenario without the loss of throughput. BZ# 771870 A bug in the splice code has caused the file position on the write side of the sendfile() system call to be incorrectly set to the read side file position. This could result in the data being written to an incorrect offset. Now, sendfile() has been modified to correctly use the current file position for the write side file descriptor, thus fixing this bug. Note Note that in the following common sendfile() scenarios, this bug does not occur: when both read and write file positions are identical and when the file position is not important, for example, if the write side is a socket. BZ# 772884 On large SMP systems, the TSC (Time Stamp Counter) clock frequency could be incorrectly calculated. The discrepancy between the correct value and the incorrect value was within 0.5%. When the system rebooted, this small error would result in the system becoming out of synchronization with an external reference clock (typically a NTP server). With this update, the TSC frequency calculation has been improved and the clock correctly maintains synchronization with external reference clocks. Users should upgrade to these updated packages, which contain backported patches to correct these issues and fix these bugs. The system must be rebooted for this update to take effect. 4.119.10. RHSA-2012:0481 - Moderate: kernel security, bug fix, and enhancement update Updated kernel packages that resolve several security issues, fix number of bugs, and add several enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Security Fixes CVE-2012-0879 , Moderate Numerous reference count leaks were found in the Linux kernel's block layer I/O context handling implementation. This could allow a local, unprivileged user to cause a denial of service. CVE-2012-1090 , Moderate A flaw was found in the Linux kernel's cifs_lookup() implementation. POSIX open during lookup should only be supported for regular files. 
When non-regular files (for example, a named (FIFO) pipe or other special files) are opened on lookup, it could cause a denial of service. CVE-2012-1097 , Moderate It was found that the Linux kernel's register set (regset) common infrastructure implementation did not check if the required get and set handlers were initialized. A local, unprivileged user could use this flaw to cause a denial of service by performing a register set operation with a ptrace() PTRACE_SETREGSET or PTRACE_GETREGSET request. Red Hat would like to thank H. Peter Anvin for reporting CVE-2012-1097. Bug Fixes BZ# 805458 Previously, if more than a certain number of qdiscs (Classless Queuing Disciplines) using the autohandle mechanism were allocated a soft lock-up error occurred. This update fixes the maximum loop count and adds the cond_resched() call in the loop, thus fixing this bug. BZ# 804961 Concurrent look-up operations of the same inode that was not in the per-AG (Allocation Group) inode cache caused a race condition, triggering warning messages to be returned in the unlock_new_inode() function. Although this bug could only be exposed by NFS or the xfsdump utility, it could lead to inode corruption, inode list corruption, or other related problems. With this update, the XFS_INEW flag is set before inserting the inode into the radix tree. Now, any concurrent look-up operation finds the new inode with XFS_INEW set and the operation is then forced to wait until XFS_INEW is removed, thus fixing this bug. BZ# 802430 Previously, when isolating pages for migration, the migration started at the start of a zone while the free scanner started at the end of the zone. Migration avoids entering a new zone by never going beyond what the free scanner scanned. In very rare cases, nodes overlapped and the migration isolated pages without the LRU lock held, which triggered errors in reclaim or during page freeing. With this update, the isolate_migratepages() function makes a check to ensure that it never isolates pages from a zone it does not hold the LRU lock for, thus fixing this bug. BZ# 802379 An anomaly in the memory map created by the mbind() function caused a segmentation fault in Hotspot Java Virtual Machines with the NUMA-aware Parallel Scavenge garbage collector. A backported upstream patch that fixes mbind() has been provided and the crashes no longer occur in the described scenario. BZ# 786873 Previously, the SFQ qdisc packet scheduler class had no bind_tcf() method. Consequently, if a filter was added with the classid parameter to SFQ, a kernel panic occurred due to a null pointer dereference. With this update, the dummy .unbind_tcf and .put qdisc class options have been added to conform with the behaviour of other schedulers, thus fixing this bug. BZ# 787764 The kernel code checks for conflicts when an application requests a specific port. If there is no conflict, the request is granted. However, the port auto-selection done by the kernel failed when all ports were bound, even if there was an available port with no conflicts. With this update, the port auto-selection code has been fixed to properly use ports with no conflicts. BZ# 789060 Due to a race condition between the notify_on_release() function and task movement between cpuset or memory cgroup directories, a system deadlock could occur. With this update, the cgroup_wq cgroup has been created and both async_rebuild_domains() and check_for_release() functions used for task movements use it, thus fixing this bug. 
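For reference, the regset interface affected by CVE-2012-1097 above is normally driven through ptrace() requests such as PTRACE_GETREGSET. The following hedged C sketch shows the usual call pattern; it assumes the target process is already being traced and is purely illustrative.

/* Illustrative PTRACE_GETREGSET call; assumes `pid` is already traced.
 * PTRACE_GETREGSET may require a recent glibc; older headers need the raw
 * request number instead. */
#include <elf.h>          /* NT_PRSTATUS */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>      /* struct iovec */
#include <sys/user.h>     /* struct user_regs_struct */

int dump_regs(pid_t pid)
{
    struct user_regs_struct regs;
    struct iovec iov = { .iov_base = &regs, .iov_len = sizeof(regs) };

    if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRSTATUS, &iov) == -1) {
        perror("PTRACE_GETREGSET");
        return -1;
    }
    printf("copied %zu bytes of registers\n", iov.iov_len);
    return 0;
}

NT_PRSTATUS selects the general-purpose register set; the flaw above concerned requests against register sets whose get and set handlers had not been initialized.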
BZ# 789061 Previously, the utime and stime values in the /proc/<pid>/stat file of a multi-threaded process could wrongly decrease when one of its threads exited. A backported patch has been provided to maintain monotonicity of utime and stime in the described scenario, thus fixing this bug. BZ# 801723 The vmxnet3 driver in Red Hat Enterprise Linux 6.2 introduced a regression. Due to an optimization, in which at least 54 bytes of a frame were copied to a contiguous buffer, shorter frames were dropped as the frame did not have 54 bytes available to copy. With this update, transfer size for a buffer is limited to 54 bytes or the frame size, whichever is smaller, and short frames are no longer dropped in the described scenario. BZ# 789373 In the Common Internet File System (CIFS), the oplock break jobs and async callback handlers both use the SLOW-WORK workqueue, which has a finite pool of threads. Previously, these oplock break jobs could end up taking all the running queues waiting for a page lock which blocks the callback required to free this page lock from being completed. This update separates the oplock break jobs into a separate workqueue VERY-SLOW-WORK , allowing the callbacks to be completed successfully and preventing the deadlock. BZ# 789911 Previously, the doorbell register was being unconditionally swapped. If the Blue Frame option was enabled, the register was incorrectly written to the descriptor in the little endian format. Consequently, certain adapters could not communicate over a configured IP address. With this update, the doorbell register is not swapped unconditionally, rather, it is always converted to big endian before it is written to the descriptor, thus fixing this bug. BZ# 790007 Previously, due to a bug in a graphics driver in systems running a future Intel processor with graphics acceleration, attempts to suspend the system to the S3/S4 state failed. This update resolves this issue and transitions to the suspend mode now work correctly in the described scenario. BZ# 790338 Prior to this update, the wrong size was being calculated for the vfinfo structure. Consequently, networking drivers that created a large number of virtual functions caused warning messages to appear when loading and unloading modules. Backported patches from upstream have been provided to resolve this issue, thus fixing this bug. BZ# 790341 Previously, when a MegaRAID 9265/9285 or 9360/9380 controller got a timeout in the megaraid_sas driver, the invalid SCp.ptr pointer could be called from the megasas_reset_timer() function. As a consequence, a kernel panic could occur. An upstream patch has been provided to address this issue and the pointer is now always set correctly. BZ# 790905 Previously, when pages were being migrated via NFS with an active requests on them, if a particular inode ended up deleted, then the VFS called the truncate_inode_pages() function. That function tried to take the page lock, but it was already locked when migrate_page() was called. As a consequence, a deadlock occurred in the code. This bug has been fixed and the migration request is now refused if the PagePrivate parameter is already set, indicating that the page is already associated with an active read or write request. BZ# 795326 Due to invalid calculations of the vruntime variable along with task movement between cgroups, moving tasks between cgroups could cause very long scheduling delays. This update fixes this problem by setting the cfs_rq and curr parameters after holding the rq->lock lock. 
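As background for BZ# 802379 above, mbind() is the interface the NUMA-aware garbage collector uses to place parts of its heap on specific nodes. The hedged C fragment below sketches a typical call; the mapping length and node mask are illustrative values and the prototype comes from the numactl development headers.

/* Illustrative mbind() call; example values only.  Requires the numactl
 * development headers; link with -lnuma. */
#define _GNU_SOURCE
#include <numaif.h>
#include <stddef.h>
#include <sys/mman.h>

void *bind_to_node0(size_t len)
{
    unsigned long nodemask = 1UL;   /* bit 0 set: NUMA node 0 */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return NULL;
    /* Bind the anonymous mapping to node 0 before it is touched. */
    if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) != 0) {
        munmap(buf, len);
        return NULL;
    }
    return buf;
}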
BZ# 795335 Due to a race condition, running the ifenslave -d bond0 eth0 command to remove the slave interface from the bonding device could cause the system to terminate if a networking packet was being received at the same time. With this update, the race condition has been fixed and the system no longer crashes in the described scenario. BZ# 795338 Previously, an unnecessary assertion could trigger depending on the value of the xpt_pool field. As a consequence, a node could terminate unexpectedly. The xpt_pool field was in fact unnecessary and this update removes it from the sunrpc code, thus preventing this bug. BZ# 797241 Due to a race condition, the mac80211 framework could deauthenticate with an access point (AP) while still scheduling authentication retries with the same AP. If such an authentication attempt timed out, a warning message was returned to kernel log files. With this update, when deauthenticating, pending authentication retry attempts are checked and cancelled if found, thus fixing this bug. BZ# 801718 Prior to this update, the find_busiest_group() function used sched_group->cpu_power in the denominator of a fraction with a value of 0 . Consequently, a kernel panic occurred. This update prevents the divide by zero in the kernel and the panic no longer occurs. BZ# 798572 When the nohz=off kernel parameter was set, kernel could not enter any CPU C-state. With this update, the underlying code has been fixed and transitions to CPU idle states now work as expected. BZ# 797182 Under heavy memory and file system load, the mapping->nrpages == 0 assertion could occur in the end_writeback() function. As a consequence, a kernel panic could occur. This update provides a reliable check for mapping->nrpages that prevent the described assertion, thus fixing this bug. BZ# 797205 Due to a bug in the hid_reset() function, a deadlock could occur when a Dell iDRAC controller was reset. Consequently, its USB keyboard or mouse device became unresponsive. A patch that fixes the underlying code has been provided to address this bug and the hangs no longer occur in the described scenario. BZ# 796828 On a system that created and deleted lots of dynamic devices, the 31-bit Linux ifindex object failed to fit in the 16-bit macvtap minor range, resulting in unusable macvtap devices. The problem primarily occurred in a libvirt -controlled environment when many virtual machines were started or restarted, and caused libvirt to report the following message: With this update, the macvtap 's minor device number allocation has been modified so that virtual machines can now be started and restarted as expected in the described scenario. BZ# 799943 The dm_mirror module can send discard requests. However, the dm_io interface did not support discard requests and running an LVM mirror over a discard-enabled device led to a kernel panic. This update adds support for the discard requests to the dm_io interface and kernel panics no longer occur in the described scenario. BZ# 749248 When a process isolation mechanism such as LXC (Linux Containers) was used and the user space was running without the CAP_SYS_ADMIN identifier set, a jailed root user could bypass the dmesg_restrict protection, creating an inconsistency. Now, writing to dmesg_restrict is only allowed when the root has CAP_SYS_ADMIN set, thus preventing this bug. Enhancements BZ# 789371 With this update, the igb driver has been updated to the latest upstream version 3.2.10-k to provide up-to-date hardware support, features and bug fixes. 
BZ# 800552 This update provides support for the O_DIRECT flag for files in FUSE (Filesystem in Userspace). This flag minimizes cache effects of the I/O to and from a file. In general, using this flag degrades performance, but it is useful in special situations, such as when applications do their own caching. BZ# 770651 This update adds support for mount options to restrict access to /proc/<PID>/ directories. One of the options is called hidepid= and its value defines how much information about processes is provided to non-owners. The gid= option defines a group that gathers information about all processes. Untrusted users, which are not supposed to monitor tasks in the whole system, should not be added to the group. Users should upgrade to these updated packages, which contain backported patches to resolve these issues, fix these bugs, and add these enhancements. The system must be rebooted for this update to take effect. 4.119.11. RHSA-2012:0571 - Moderate: kernel security and bug fix update Updated kernel packages that resolve several security issues and fix a number of bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Security Fixes CVE-2011-4086 , Moderate A flaw was found in the way the Linux kernel's journal_unmap_buffer() function handled buffer head states. On systems that have an ext4 file system with a journal mounted, a local, unprivileged user could use this flaw to cause a denial of service. CVE-2012-1601 , Moderate A flaw was found in the way the KVM_CREATE_IRQCHIP ioctl was handled. Calling this ioctl when at least one virtual CPU (VCPU) already existed could lead to a NULL pointer dereference later when the VCPU is scheduled to run. A local, unprivileged user on a KVM host could use this flaw to crash the host. Bug Fixes BZ# 810454 Previously, the eth_type_trans() function was called with the VLAN device type set. If a VLAN device contained a MAC address different from the original device, an incorrect packet type was assigned to the host. Consequently, if the VLAN devices were set up on a bonding interface in Adaptive Load Balancing (ALB) mode, the TCP connection could not be established. With this update, the eth_type_trans() function is called with the original device, ensuring that the connection is established as expected. BZ# 801329 When short audio periods were configured, the ALSA PCM midlevel code, shared by all sound cards, could cause audio glitches and other problems. This update adds a time check for double acknowledged interrupts and improves stability of the snd-aloop kernel module, thus fixing this bug. BZ# 802852 Previously, the idmapper utility pre-allocated space for all user and group names on an NFS client in advance. Consequently, page allocation failure could occur, preventing a proper mount of a directory. With this update, the allocation of the names is done dynamically when needed, the size of the allocation table is now greatly reduced, and the allocation failures no longer occur. BZ# 803881 In a Boot-from-San (BFS) installation via certain iSCSI adapters, driver exported sendtarget entries in the sysfs file system but the iscsistart failed to perform discovery. Consequently, a kernel panic occurred during the first boot sequence. 
With this update, the driver performs the discovery instead, thus preventing this bug. BZ# 810322 The SCSI layer was not using a large enough buffer to properly read the entire BLOCK LIMITS VPD page that is advertised by a storage array. Consequently, the WRITE SAME MAX LEN parameter was read incorrectly and this could result in the block layer issuing discard requests that were too large for the storage array to handle. This update increases the size of the buffer that the BLOCK LIMITS VPD page is read into and the discard requests are now issued with proper size, thus fixing this bug. BZ# 805457 A bug in the try_to_wake_up() function could cause status change from TASK_DEAD to TASK_RUNNING in a race condition with an SMI (system management interrupt) or a guest environment of a virtual machine. As a consequence, the exited task was scheduled again and a kernel panic occurred. This update fixes the race condition in the do_exit() function and the panic no longer occurs in the described scenario. BZ# 806205 When expired user credentials were used in the RENEW() calls, the calls failed. Consequently, all access to the NFS share on the client became unresponsive. With this update, the machine credentials are used with these calls instead, thus preventing this bug most of the time. If no machine credentials are available, user credentials are used as before. BZ# 806859 When the python-perf subpackage was installed, the debug information for the bindings were added to the debuginfo-common subpackage, making it unable to install the debuginfo-common package of a different version. With this update, a separate subpackage is used to store debug information for python-perf , thus fixing this bug. BZ# 809388 Due to the netdevice handler for FCoE (Fibre Channel over Ethernet) and the exit path blocking the keventd work queue, the destroy operation on an NPIV (N_Port ID Virtualization) FCoE port led to a deadlock interdependency and caused the system to become unresponsive. With this update, the destroy_work item has been moved to its own work queue and is now executed in the context of the user space process requesting the destroy, thus preventing this bug. BZ# 809372 The fcoe_transport_destroy path uses a work queue to destroy the specified FCoE interface. Previously, the destroy_work work queue item blocked another single-threaded work queue. Consequently, a deadlock between queues occurred and the system became unresponsive. With this update, fcoe_transport_destroy has been modified and is now a synchronous operation, allowing to break the deadlock dependency. As a result, destroy operations are now able to complete properly, thus fixing this bug. BZ# 809378 During tests with active I/O on 256 LUNs (logical unit numbers) over FCoE, a large number SCSI mid layer error messages were returned. As a consequence, the system became unresponsive. This bug has been fixed by limiting the source of the error messages and the hangs no longer occur in the described scenario. BZ# 807158 When running AF_IUCV socket programs with IUCV transport, an IUCV SEVER call was missing in the callback of a receiving IUCV SEVER interrupt. Under certain circumstances, this could prevent z/VM from removing the corresponding IUCV-path completely. This update adds the IUCV SEVER call to the callback, thus fixing this bug. In addition, internal socket states have been merged, thus simplifying the AF_IUCV code. 
BZ# 809374 Previously, the AMD IOMMU (input/output memory management unit) driver could use the MSI address range for DMA (direct memory access) addresses. As a consequence, DMA could fail and spurious interrupts would occur if this address range was used. With this update, the MSI address range is reserved to prevent the driver from allocating wrong addresses and DMA is now assured to work as expected in the described scenario. BZ# 811299 Due to incorrect use of the list_for_each_entry_safe() macro, the enumeration of remote procedure calls (RPCs) priority wait queue tasks stored in the tk_wait.links list failed. As a consequence, the rpc_wake_up() and rpc_wake_up_status() functions failed to wake up all tasks. This caused the system to become unresponsive and could significantly decrease system performance. Now, the list_for_each_entry_safe() macro is no longer used in rpc_wake_up() , ensuring reasonable system performance. BZ# 809376 The AMD IOMMU driver used wrong shift direction in the alloc_new_range() function. Consequently, the system could terminate unexpectedly or become unresponsive. This update fixes the code and crashes and hangs no longer occur in the described scenario. BZ# 809104 Previously, a bonding device had always the UFO (UDP Fragmentation Offload) feature enabled even when no slave interfaces supported UFO. Consequently, the tracepath command could not return correct path MTU. With this update, UFO is no longer configured for bonding interfaces by default if the underlying hardware does not support it, thus fixing this bug. BZ# 807426 Previously, when the PCI driver switched from MSI/MSI-X (Message Signaled Interrupts) to the INTx emulation while shutting down a device, an unwanted interrupt was generated. Consequently, interrupt handler of IPMI was called repeatedly, causing the system to become unresponsive. This update adds a parameter to avoid using MSI/MSI-X for PCIe native hot plug operations and the hangs no longer occur in the described scenario. BZ# 811135 On NFS, when repeatedly reading a directory, content of which kept changing, the client issued the same readdir request twice. Consequently, the following warning messages were returned to the dmesg output: This update fixes the bug by turning off the loop detection and letting the NFS client try to recover in the described scenario and the messages are no longer returned. BZ# 806906 The Intelligent Platform Management Interface (IPMI) specification requires a minimum communication timeout of five seconds. Previously, the kernel incorrectly used a timeout of one second. This could result in failures to communicate with Baseboard Management Controllers (BMC) under certain circumstances. With this update, the timeout has been increased to five seconds to prevent such problems. BZ# 804548 Prior to this update, bugs in the close() and send() functions caused delays and operation of these two functions took too long to complete. This update adds the IUCV_CLOSED state change and improves locking for close() . Also, the net_device handling has been improved in send() . As a result, the delays no longer occur. BZ# 804547 When AF_IUCV sockets were using the HiperSockets transport, maximum message size for such transports depended on the MTU (maximum transmission unit) size of the HiperSockets device bound to a AF_IUCV socket. However, a socket program could not determine maximum size of a message. This update adds the MSGSIZE option for the getsockopt() function. 
Through this option, the maximum message size can be read and properly handled by AF_IUCV . BZ# 809391 Previously, on a system where intermediate P-states were disabled, the powernow-k8 driver could cause a kernel panic in the cpufreq subsystem. Additionally, not all available P-states were recognized by the driver. This update modifies the drive code so that it now properly recognizes all P-states and does not cause the panics in the described scenario. Users should upgrade to these updated packages, which contain backported patches to resolve these issues and fix these bugs. The system must be rebooted for this update to take effect. 4.119.12. RHBA-2012:0124 - kernel bug fix update Updated kernel packages that fix one bug are now available for Red Hat Enterprise Linux 6. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fix BZ# 781974 An insufficiently designed calculation in the CPU accelerator in the kernel caused an arithmetic overflow in the sched_clock() function when system uptime exceeded 208.5 days. This overflow led to a kernel panic on the systems using the Time Stamp Counter (TSC) or Virtual Machine Interface (VMI) clock source. This update corrects the aforementioned calculation so that this arithmetic overflow and kernel panic can no longer occur under these circumstances. All users are advised to upgrade to these updated packages, which fix this bug. The system must be rebooted for this update to take effect. 4.119.13. RHSA-2012:0743 - Important: kernel security and bug fix update Updated kernel packages that resolve several security issues and fix a number of bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Security Fixes CVE-2012-0044 , Important A local, unprivileged user could use an integer overflow flaw in drm_mode_dirtyfb_ioctl() to cause a denial of service or escalate their privileges. CVE-2012-2119 , Important A buffer overflow flaw was found in the macvtap device driver, used for creating a bridged network between the guest and the host in KVM (Kernel-based Virtual Machine) environments. A privileged guest user in a KVM guest could use this flaw to crash the host. Note Note that this issue only affected hosts that have the vhost_net module loaded with the experimental_zcopytx module option enabled (it is not enabled by default), and that also have macvtap configured for at least one guest. CVE-2012-2123 , Important When a set user ID (setuid) application is executed, certain personality flags for controlling the application's behavior are cleared (that is, a privileged application will not be affected by those flags). It was found that those flags were not cleared if the application was made privileged via file system capabilities. A local, unprivileged user could use this flaw to change the behavior of such applications, allowing them to bypass intended restrictions. Note that for default installations, no application shipped by Red Hat for Red Hat Enterprise Linux is made privileged via file system capabilities. CVE-2012-2136 , Important It was found that the data_len parameter of the sock_alloc_send_pskb() function in the Linux kernel's networking implementation was not validated before use. 
A privileged guest user in a KVM guest could use this flaw to crash the host or, possibly, escalate their privileges on the host. CVE-2012-2137 , Important A buffer overflow flaw was found in the setup_routing_entry() function in the KVM subsystem of the Linux kernel in the way the Message Signaled Interrupts (MSI) routing entry was handled. A local, unprivileged user could use this flaw to cause a denial of service or, possibly, escalate their privileges. CVE-2012-1179 , Moderate A race condition was found in the Linux kernel's memory management subsystem in the way pmd_none_or_clear_bad() , when called with mmap_sem in read mode, and Transparent Huge Pages (THP) page faults interacted. A privileged user in a KVM guest with the ballooning functionality enabled could potentially use this flaw to crash the host. A local, unprivileged user could use this flaw to crash the system. CVE-2012-2121 , Moderate A flaw was found in the way device memory was handled during guest device removal. Upon successful device removal, memory used by the device was not properly unmapped from the corresponding IOMMU or properly released from the kernel, leading to a memory leak. A malicious user on a KVM host who has the ability to assign a device to a guest could use this flaw to crash the host. CVE-2012-2372 , Moderate A flaw was found in the Linux kernel's Reliable Datagram Sockets (RDS) protocol implementation. A local, unprivileged user could use this flaw to cause a denial of service. CVE-2012-2373 , Moderate A race condition was found in the Linux kernel's memory management subsystem in the way pmd_populate() and pte_offset_map_lock() interacted on 32-bit x86 systems with more than 4GB of RAM. A local, unprivileged user could use this flaw to cause a denial of service. Red Hat would like to thank Chen Haogang for reporting CVE-2012-0044. Bug Fixes BZ# 823903 Previously, if creation of an MFN (Machine Frame Number) was lazily deferred, the MFN could appear invalid when it was not. If at this point read_pmd_atomic() was called, which then called the paravirtualized __pmd() function, and returned zero, the kernel could terminate unexpectedly. With this update, the __pmd() call is avoided in the described scenario and the open-coded compound literal is returned instead, thus fixing this bug. BZ# 812953 The kdump utility does not support Xen para-virtualized (PV) drivers on Hardware Virtualized Machine (HVM) guests in Red Hat Enterprise Linux 6. Therefore, kdump failed to start if the guest had loaded PV drivers. This update modifies the underlying code to allow kdump to start without PV drivers on HVM guests configured with PV drivers. BZ# 816226 Various problems occurring in the 5 GHz band were discovered in the iwlwifi driver. Consequently, roaming between access points (AP) on 2.4 GHz and 5 GHz did not work properly. This update adds a new option to the driver that disables the 5 GHz band support. BZ# 816225 The ctx->vif identifier is dereferenced in different parts of the iwlwifi code. When it was set to null before requesting hardware reset, the kernel could terminate unexpectedly. An upstream patch has been provided to address this issue and the crashes no longer occur in the described scenario. BZ# 824429 Previously, with a transparent proxy configured and under high load, the kernel could start to drop packets, return error messages such as ip_rt_bug: addr1 -> addr2, ? , and, under rare circumstances, terminate unexpectedly.
This update provides patches addressing these issues and the described problems no longer occur. BZ# 819614 Prior to this update, Active State Power Management (ASPM) was not properly disabled, and this interfered with the correct operation of the hpsa driver. Certain HP BIOS versions do not report a proper disable bit, and when the kernel fails to read this bit, the kernel defaults to enabling ASPM. Consequently, certain servers equipped with a HP Smart Array controller were unable to boot unless the pcie_aspm=off option was specified on the kernel command line. A backported patch has been provided to address this problem, ASPM is now properly disabled, and the system now boots up properly in the described scenario. BZ# 799946 When an adapter was taken down over the RoCE (RDMA over Converged Ethernet) protocol while a workload was running, kernel terminated unexpectedly. A patch has been provided to address this issue and the crash no longer occurs in the described scenario. BZ# 818504 Previously, network drivers that had Large Receive Offload (LRO) enabled by default caused the system to run slow, lose frame, and eventually prevent communication, when using software bridging. With this update, LRO is automatically disabled by the kernel on systems with a bridged configuration, thus preventing this bug. BZ# 818503 Due to a running cursor blink timer, when attempting to hibernate certain types of laptops, the i915 kernel driver could corrupt memory. Consequently, the kernel could crash unexpectedly. An upstream patch has been provided to make the i915 kernel driver use the correct console suspend API and the hibernate function now works as expected. BZ# 817466 The slave member of struct aggregator does not necessarily point to a slave which is part of the aggregator. It points to the slave structure containing the aggregator structure, while completely different slaves (or no slaves at all) may be part of the aggregator. Due to a regression, the agg_device_up() function wrongly used agg->slave to find the state of the aggregator. Consequently, wrong active aggregator was reported to the /proc/net/bonding/bond0 file. With this update, agg->lag_ports->slave is used in the described scenario instead, thus fixing this bug. BZ# 816271 As part of mapping the application's memory, a buffer to hold page pointers is allocated and the count of mapped pages is stored in the do_dio field. A non-zero do_dio marks that direct I/O is in use. However, do_dio is only one byte in size. Previously, mapping 256 pages overflowed do_dio and caused it to be set to 0 . As a consequence, when large enough number of read or write requests were sent using the st driver's direct I/O path, a memory leak could occur in the driver. This update increases the size of do_dio , thus preventing this bug. BZ# 810125 Previously, requests for large data blocks with the ZSECSENDCPRB ioctl() system call failed due to an invalid parameter. A misleading error code was returned, concealing the real problem. With this update, the parameter for the ZSECSENDCPRB request code constant is validated with the correct maximum value. Now, if the parameter length is not valid, the EINVAL error code is returned, thus fixing this bug. BZ# 814657 While doing wireless roaming, under stressed conditions, an error could occur in the ieee80211_mgd_probe_ap_send() function and cause a kernel panic. With this update, the mac80211 MLME (MAC Layer Management Entity) code has been rewritten, thus fixing this bug. 
BZ# 816197 Previously, secondary, tertiary, and other IP addresses added to bond interfaces could overwrite the bond->master_ip and vlan_ip values. Consequently, a wrong IP address could be occasionally used, the MII (Media Independent Interface) status of the backup slave interface went down, and the bonding master interfaces were switching. This update removes the master_ip and vlan_ip elements from the bonding and vlan_entry structures, respectively. Instead, devices are directly queried for the optimal source IP address for ARP requests, thus fixing this bug. BZ# 818505 Red Hat Enterprise Linux 6.1 introduced naming scheme adjustments for emulated SCSI disks used with paravirtual drivers to prevent namespace clashes between emulated IDE and emulated SCSI disks. Both emulated disk types use the paravirt block device xvd . Consider the example below:
Table 4.1. The naming scheme example
  Disk type        Red Hat Enterprise Linux 6.0    Red Hat Enterprise Linux 6.1 or later
  emulated IDE     hda -> xvda                     unchanged
  emulated SCSI    sda -> xvda                     sda -> xvde, sdb -> xvdf, ...
This update introduces a new module parameter, xen_blkfront.sda_is_xvda , that provides a seamless upgrade path from 6.0 to 6.3 kernel release. The default value of xen_blkfront.sda_is_xvda is 0 and it keeps the naming scheme consistent with 6.1 and later releases. When xen_blkfront.sda_is_xvda is set to 1 , the naming scheme reverts to the 6.0-compatible mode. Note Note that when upgrading from 6.0 to 6.3 release, if a virtual machine specifies emulated SCSI devices and utilizes paravirtual drivers and uses explicit disk names such as xvd[a-d] , it is advised to add the xen_blkfront.sda_is_xvda=1 parameter to the kernel command line before performing the upgrade. BZ# 809399 Due to an off-by-one bug in max_blocks checks, on the 64-bit PowerPC architecture, the tmpfs file system did not respect the size= parameter and consequently reported incorrect number of available blocks. A backported upstream patch has been provided to address this issue and tmpfs now respects the size= parameter as expected. Users should upgrade to these updated packages, which contain backported patches to resolve these issues and fix these bugs. The system must be rebooted for this update to take effect. 4.119.14. RHBA-2013:1169 - kernel bug fix update Updated kernel packages that fix several bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, which is the core of any Linux operating system. Bug Fixes BZ# 977666 A race condition between the read_swap_cache_async() and get_swap_page() functions in the Memory management (mm) code could lead to a deadlock situation. The deadlock could occur only on systems that deployed swap partitions on devices supporting block DISCARD and TRIM operations if kernel preemption was disabled (the !CONFIG_PREEMPT parameter). If the read_swap_cache_async() function was given a SWAP_HAS_CACHE entry that did not have a page in the swap cache yet, a DISCARD operation was performed in the scan_swap_map() function. Consequently, completion of an I/O operation was scheduled on the same CPU's working queue the read_swap_cache_async() was running on. This caused the thread in read_swap_cache_async() to loop indefinitely around its "-EEXIST" case, rendering the system unresponsive. The problem has been fixed by adding an explicit cond_resched() call to read_swap_cache_async(), which allows other tasks to run on the affected CPU, and thus avoiding the deadlock.
BZ# 982113 The bnx2x driver could have previously reported an occasional MDC/MDIO timeout error along with the loss of the link connection. This could happen in environments using an older boot code because the MDIO clock was set in the beginning of each boot code sequence instead of per CL45 command. To avoid this problem, the bnx2x driver now sets the MDIO clock per CL45 command. Additionally, the MDIO clock is now implemented per EMAC register instead of per port number, which prevents ports from using different EMAC addresses for different PHY accesses. Also, boot code or Management Firmware (MFW) upgrade is required to prevent the boot code (firmware) from taking over link ownership if the driver's pulse is delayed. The BCM57711 card requires boot code version 6.2.24 or later, and the BCM57712/578xx cards require MFW version 7.4.22 or later. BZ# 982467 If the audit queue is too long, the kernel schedules the kauditd daemon to alleviate the load on the audit queue. Previously, if the current audit process had any pending signals in such a situation, it entered a busy-wait loop for the duration of an audit backlog timeout because the wait_for_auditd() function was called as an interruptible task. This could lead to system lockup in non-preemptive uniprocessor systems. This update fixes the problem by setting wait_for_auditd() as uninterruptible. BZ# 988225 The kernel could rarely terminate instead of creating a dump file when a multi-threaded process using FPU aborted. This happened because the kernel did not wait until all threads became inactive and attempted to dump the FPU state of active threads into memory which triggered a BUG_ON() routine. A patch addressing this problem has been applied and the kernel now waits for the threads to become inactive before dumping their FPU state into memory. BZ# 990080 Due to hardware limits, the be2net adapter cannot handle packets with size greater than 64 KB including the Ethernet header. Therefore, if the be2net adapter received xmit requests exceeding this size, it was unable to process the requests, produced error messages and could become unresponsive. To prevent these problems, GSO (Generic Segmentation Offload) maximum size has been reduced to account for the Ethernet header. BZ# 990085 BE family hardware could falsely indicate an unrecoverable error (UE) on certain platforms and stop further access to be2net-based network interface cards (NICs). A patch has been applied to disable the code that stops further access to hardware for BE family network interface cards (NICs). For a real UE, it is not necessary as the corresponding hardware block is not accessible in this situation. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 4.119.15. RHSA-2013:0840 - Important: kernel security update Updated kernel packages that fix one security issue are now available for Red Hat Enterprise Linux 6.2 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. 
Security Fix CVE-2013-2094 , Important This update fixes the following security issue: * It was found that the Red Hat Enterprise Linux 6.1 kernel update (RHSA-2011:0542) introduced an integer conversion issue in the Linux kernel's Performance Events implementation. This led to a user-supplied index into the perf_swevent_enabled array not being validated properly, resulting in out-of-bounds kernel memory access. A local, unprivileged user could use this flaw to escalate their privileges. A public exploit that affects Red Hat Enterprise Linux 6 is available. Refer to Red Hat Knowledge Solution 373743, linked to in the References, for further information and mitigation instructions for users who are unable to immediately apply this update. Users should upgrade to these updated packages, which contain a backported patch to correct this issue. The system must be rebooted for this update to take effect. 4.119.16. RHBA-2013:1397 - kernel bug fix update Updated kernel packages that fix two bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, which is the core of any Linux operating system. Bug Fixes BZ# 1004659 Previously, the be2net driver failed to detect the last port of BE3 (BladeEngine 3) when UMC (Universal Multi-Channel) was enabled. Consequently, two of the ports could not be used by users and error messages were returned. A patch has been provided to fix this bug and be2net driver now detects all ports without returning any error messages. BZ# 1005060 When a copy-on-write fault happened on a Transparent Huge Page (THP), the 2 MB THP caused the cgroup to exceed the "memory.limit_in_bytes" value but the individual 4 KB page was not exceeded. Consequently, the Out of Memory (OOM) killer killed processes outside of a memory cgroup when one or more processes inside that memory cgroup exceeded the "memory.limit_in_bytes" value. With this update, the 2 MB THP is correctly split into 4 KB pages when the "memory.limit_in_bytes" value is exceeded. The OOM kill is delivered within the memory cgroup; tasks outside the memory cgroups are no longer killed by the OOM killer. Users should upgrade to these updated packages, which contain backported patches to correct these bugs. The system must be rebooted for this update to take effect. 4.119.17. RHSA-2013:1519 - Important: kernel security and bug fix update Updated kernel packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6.2 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2012-4508 , Important A race condition was found in the way asynchronous I/O and fallocate() interacted when using the ext4 file system. A local, unprivileged user could use this flaw to expose random data from an extent whose data blocks have not yet been written, and thus contain data from a deleted file. CVE-2013-4299 , Moderate An information leak flaw was found in the way Linux kernel's device mapper subsystem, under certain conditions, interpreted data written to snapshot block devices. An attacker could use this flaw to read data from disk blocks in free space, which are normally inaccessible. 
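For orientation, the combination at issue in CVE-2012-4508 above is an asynchronous write racing with fallocate() on ext4. The hedged C sketch below shows the two interfaces being used together through libaio; the file name and sizes are placeholders, error handling is omitted, and the snippet is illustrative only, not a reproducer.

/* Illustrative fallocate() + libaio combination; link with -laio.
 * Older toolchains may need <linux/falloc.h> for FALLOC_FL_KEEP_SIZE. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>

int main(void)
{
    int fd = open("testfile", O_RDWR | O_CREAT | O_DIRECT, 0600);  /* placeholder name */
    void *buf;
    posix_memalign(&buf, 4096, 4096);

    /* Preallocate an unwritten extent ... */
    fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1 << 20);

    /* ... then issue an asynchronous write into it. */
    io_context_t ctx = 0;
    io_setup(8, &ctx);
    struct iocb cb;
    struct iocb *list[1] = { &cb };
    io_prep_pwrite(&cb, fd, buf, 4096, 0);
    io_submit(ctx, 1, list);

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);
    io_destroy(ctx);
    return 0;
}

fallocate() with FALLOC_FL_KEEP_SIZE creates unwritten extents, and it is the interaction of such extents with in-flight asynchronous I/O that the fix above addresses.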
Red Hat would like to thank Theodore Ts'o for reporting CVE-2012-4508, and Fujitsu for reporting CVE-2013-4299. Upstream acknowledges Dmitry Monakhov as the original reporter of CVE-2012-4508. Bug Fixes BZ# 1017898 When the Audit subsystem was under heavy load, it could loop infinitely in the audit_log_start() function instead of failing over to the error recovery code. This would cause soft lockups in the kernel. With this update, the timeout condition in the audit_log_start() function has been modified to properly fail over when necessary. BZ# 1017902 When handling Memory Type Range Registers (MTRRs), the stop_one_cpu_nowait() function could potentially be executed in parallel with the stop_machine() function, which resulted in a deadlock. The MTRR handling logic now uses the stop_machine() function and makes use of mutual exclusion to avoid the aforementioned deadlock. BZ# 1020519 Power-limit notification interrupts were enabled by default. This could lead to degradation of system performance or even render the system unusable on certain platforms, such as Dell PowerEdge servers. Power-limit notification interrupts have been disabled by default and a new kernel command line parameter "int_pln_enable" has been added to allow users to observe these events using the existing system counters. Power-limit notification messages are also no longer displayed on the console. The affected platforms no longer suffer from degraded system performance due to this problem. BZ# 1021950 Package level thermal and power limit events are not defined as MCE errors for the x86 architecture. However, the mcelog utility erroneously reported these events as MCE errors with the following message: kernel: [Hardware Error]: Machine check events logged Package level thermal and power limit events are no longer reported as MCE errors by mcelog. When these events are triggered, they are now reported only in the respective counters in sysfs (specifically, /sys/devices/system/cpu/cpu<number>/thermal_throttle/). BZ# 1024453 An insufficiently designed calculation in the CPU accelerator could cause an arithmetic overflow in the set_cyc2ns_scale() function if the system uptime exceeded 208 days prior to using kexec to boot into a new kernel. This overflow led to a kernel panic on systems using the Time Stamp Counter (TSC) clock source, primarily systems using Intel Xeon E5 processors that do not reset TSC on soft power cycles. A patch has been applied to modify the calculation so that this arithmetic overflow and kernel panic can no longer occur under these circumstances. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 4.119.18. RHSA-2013:0882 - Important: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6.2 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fixes CVE-2013-0311 , Important This update fixes the following security issues: * A flaw was found in the way the vhost kernel module handled descriptors that spanned multiple regions. A privileged guest user in a KVM (Kernel-based Virtual Machine) guest could use this flaw to crash the host or, potentially, escalate their privileges on the host. CVE-2012-4461 , Moderate A flaw was found in the way the KVM subsystem handled guests attempting to run with the X86_CR4_OSXSAVE CPU feature flag set. On hosts without the XSAVE CPU feature, a local, unprivileged user could use this flaw to crash the host system. (The "grep --color xsave /proc/cpuinfo" command can be used to verify if your system has the XSAVE CPU feature.) CVE-2012-4542 , Moderate It was found that the default SCSI command filter does not accommodate commands that overlap across device classes. A privileged guest user could potentially use this flaw to write arbitrary data to a LUN that is passed-through as read-only. CVE-2013-1767 , Low A use-after-free flaw was found in the tmpfs implementation. A local user able to mount and unmount a tmpfs file system could use this flaw to cause a denial of service or, potentially, escalate their privileges. Red Hat would like to thank Jon Howell for reporting CVE-2012-4461. CVE-2012-4542 was discovered by Paolo Bonzini of Red Hat. Bug Fixes BZ# 960409 Previously, when open(2) system calls were processed, the GETATTR routine did not check to see if valid attributes were also returned. As a result, the open() call succeeded with invalid attributes instead of failing in such a case. This update adds the missing check, and the open() call succeeds only when valid attributes are returned. BZ# 960418 Previously, the fsync(2) system call incorrectly returned the EIO (Input/Output) error instead of the ENOSPC (No space left on device) error. This was due to incorrect error handling in the page cache. This problem has been fixed and the correct error value is now returned. BZ# 960423 In the RPC code, when a network socket backed up due to high network traffic, a timer was set causing a retransmission, which in turn could cause an even larger amount of network traffic to be generated. To prevent this problem, the RPC code now waits for the socket to empty instead of setting the timer. BZ# 955502 This update fixes a number of bugs in the be2iscsi driver for ServerEngines BladeEngine 2 Open iSCSI devices. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 4.119.19. RHBA-2013:0584 - kernel bug fix update Updated kernel packages that fix two bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, which is the core of any Linux operating system. Bug Fixes BZ# 891862 Previously, NFS mounts failed against Microsoft Windows 8 servers, because the Windows server contained support for the minor version 1 (v4.1) of the NFS version 4 protocol only, along with support for versions 2 and 3. The lack of the minor version 0 (v4.0) support caused Red Hat Enterprise Linux 6 clients to fail instead of rolling back to version 3 as expected. This update fixes this bug and mounting an NFS export works as expected. BZ# 905433 If Time Stamp Counter (TSC) kHz calibration failed, usually on a Red Hat Enterprise Linux 6 virtual machine running inside of QEMU, the init_tsc_clocksource() function divided by zero. 
This was due to a missing check to verify if the tsc_khz variable is of a non-zero value. Consequently, booting the kernel on such a machine led to a kernel panic. This update adds the missing check to prevent this problem and TSC calibration functions normally. Users should upgrade to these updated packages, which contain backported patches to fix these bugs. The system must be rebooted for this update to take effect. | [
"[bnx2x_extract_max_cfg:1079(eth11)]Illegal configuration detected for Max BW - using 100 instead",
"A problem has been detected and windows has been shut down to prevent damage to your computer.",
"struct mmsghdr { struct msghdr msg_hdr; unsigned msg_len; }; ssize_t sendmmsg(int socket, struct mmsghdr *datagrams, int vlen, int flags);",
"Error starting domain: cannot open macvtap tap device /dev/tap222364: No such device or address",
"NFS: directory A/B/C contains a readdir loop."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/kernel |
Chapter 1. About Serverless | Chapter 1. About Serverless 1.1. OpenShift Serverless overview OpenShift Serverless provides Kubernetes native building blocks that enable developers to create and deploy serverless, event-driven applications on OpenShift Container Platform. OpenShift Serverless is based on the open source Knative project , which provides portability and consistency for hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. Note Because OpenShift Serverless releases on a different cadence from OpenShift Container Platform, the OpenShift Serverless documentation is now available as a separate documentation set at Red Hat OpenShift Serverless . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/serverless/about-serverless |
Release notes | Release notes builds for Red Hat OpenShift 1.3 Highlights of what is new and what has changed with this OpenShift Builds release Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html-single/release_notes/index |
A.4. Examples | A.4. Examples A.4.1. GET Request with cURL Example A.1. GET request The following GET request lists the virtual machines in the vms collection. Note that a GET request does not contain a body. Adapt the method ( GET ), header ( Accept: application/xml ) and URI ( https:// [RHEVM-Host] :443/ovirt-engine/api/vms ) into the following cURL command: An XML representation of the vms collection displays. A.4.2. POST Request with cURL Example A.2. POST request The following POST request creates a virtual machine in the vms collection. Note that a POST request requires a body. Adapt the method ( POST ), headers ( Accept: application/xml and Content-type: application/xml ), URI ( https:// [RHEVM-Host] :443/ovirt-engine/api/vms ) and request body into the following cURL command: The REST API creates a new virtual machine and displays an XML representation of the resource. A.4.3. PUT Request with cURL Example A.3. PUT request The following PUT request updates the memory of a virtual machine resource. Note that a PUT request requires a body. Adapt the method ( PUT ), headers ( Accept: application/xml and Content-type: application/xml ), URI ( https:// [RHEVM-Host] :443/ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 ) and request body into the following cURL command: The REST API updates the virtual machine with a new memory configuration. A.4.4. DELETE Request with cURL Example A.4. DELETE request The following DELETE request removes a virtual machine resource. Adapt the method ( DELETE ) and URI ( https:// [RHEVM-Host] :443/ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 ) into the following cURL command: The REST API removes the virtual machine. Note the Accept: application/xml request header is optional due to the empty result of DELETE requests. A.4.5. DELETE Request Including Body with cURL Example A.5. DELETE request with body The following DELETE request force removes a virtual machine resource as indicated with the optional body. Adapt the method ( DELETE ), headers ( Accept: application/xml and Content-type: application/xml ), URI ( https:// [RHEVM-Host] :443/ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 ) and request body into the following cURL command: The REST API force removes the virtual machine. | [
"GET /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml",
"curl -X GET -H \"Accept: application/xml\" -u [USER:PASS] --cacert [CERT] https:// [RHEVM-Host] :443/ovirt-engine/api/vms",
"POST /ovirt-engine/api/vms HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <name>vm1</name> <cluster> <name>default</name> </cluster> <template> <name>Blank</name> </template> <memory>536870912</memory> <os> <boot dev=\"hd\"/> </os> </vm>",
"curl -X POST -H \"Accept: application/xml\" -H \"Content-type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<vm><name>vm1</name><cluster><name>default</name></cluster><template><name>Blank</name></template><memory>536870912</memory><os><boot dev='hd'/></os></vm>\" https:// [RHEVM-Host] :443/ovirt-engine/api/vms",
"PUT /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <memory>1073741824</memory> </vm>",
"curl -X PUT -H \"Accept: application/xml\" -H \"Content-type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<vm><memory>1073741824</memory></vm>\" https:// [RHEVM-Host] :443//ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c039",
"DELETE /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1",
"curl -X DELETE -u [USER:PASS] --cacert [CERT] https:// [RHEVM-Host] :443//ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c039",
"DELETE /ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c0399 HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <force>true</force> </action>",
"curl -X DELETE -H \"Accept: application/xml\" -H \"Content-type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<action><force>true</force></action>\" https:// [RHEVM-Host] :443//ovirt-engine/api/vms/082c794b-771f-452f-83c9-b2b5a19c039"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-examples |
Preface | Preface As a data scientist, you can organize your data science work into a single project. A data science project in OpenShift AI can consist of the following components: Workbenches Creating a workbench allows you to work with models in your preferred IDE, such as JupyterLab. Cluster storage For data science projects that require data retention, you can add cluster storage to the project. Connections Adding a connection to your project allows you to connect data inputs to your workbenches. Pipelines Standardize and automate machine learning workflows to enable you to further enhance and deploy your data science models. Models and model servers Deploy a trained data science model to serve intelligent applications. Your model is deployed with an endpoint that allows applications to send requests to the model. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_on_data_science_projects/pr01 |
Chapter 7. Advisories related to this release | Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bugfixes, and CVE fixes included in this release: RHSA-2023:4628 RHSA-2023:4629 | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_release_notes/errata |
4.4. Permanent Changes in SELinux States and Modes | 4.4. Permanent Changes in SELinux States and Modes As discussed in Section 1.4, "SELinux States and Modes" , SELinux can be enabled or disabled. When enabled, SELinux has two modes: enforcing and permissive. Use the getenforce or sestatus commands to check in which mode SELinux is running. The getenforce command returns Enforcing , Permissive , or Disabled . The sestatus command returns the SELinux status and the SELinux policy being used: Note When systems run SELinux in permissive mode, users are able to label files incorrectly. Files created while SELinux is disabled are not labeled at all. This behavior causes problems when changing to enforcing mode because files are labeled incorrectly or are not labeled at all. To prevent incorrectly labeled and unlabeled files from causing problems, file systems are automatically relabeled when changing from the disabled state to permissive or enforcing mode. 4.4.1. Enabling SELinux When enabled, SELinux can run in one of two modes: enforcing or permissive. The following sections show how to permanently change into these modes. While enabling SELinux on systems that previously had it disabled, to avoid problems, such as systems unable to boot or process failures, Red Hat recommends following this procedure: Enable SELinux in permissive mode. For more information, see Section 4.4.1.1, "Permissive Mode" . Reboot your system. Check for SELinux denial messages. For more information, see Section 11.3.5, "Searching For and Viewing Denials" . If there are no denials, switch to enforcing mode. For more information, see Section 4.4.1.2, "Enforcing Mode" . To run custom applications with SELinux in enforcing mode, choose one of the following scenarios: Run your application in the unconfined_service_t domain. See Section 3.2, "Unconfined Processes" for more information. Write a new policy for your application. See the Writing Custom SELinux Policy Knowledgebase article for more information. 4.4.1.1. Permissive Mode When SELinux is running in permissive mode, SELinux policy is not enforced. The system remains operational and SELinux does not deny any operations but only logs AVC messages, which can then be used for troubleshooting, debugging, and SELinux policy improvements. Each AVC is logged only once in this case. To permanently change mode to permissive, follow the procedure below: Procedure 4.2. Changing to Permissive Mode Edit the /etc/selinux/config file as follows: Reboot the system: 4.4.1.2. Enforcing Mode When SELinux is running in enforcing mode, it enforces the SELinux policy and denies access based on SELinux policy rules. In Red Hat Enterprise Linux, enforcing mode is enabled by default when the system was initially installed with SELinux. If SELinux was disabled, follow the procedure below to change mode to enforcing again: Procedure 4.3. Changing to Enforcing Mode This procedure assumes that the selinux-policy-targeted , selinux-policy , libselinux , libselinux-python , libselinux-utils , policycoreutils , and policycoreutils-python packages are installed. To verify that the packages are installed, use the following command: rpm -q package_name Edit the /etc/selinux/config file as follows: Reboot the system: On the next boot, SELinux relabels all the files and directories within the system and adds SELinux context for files and directories that were created when SELinux was disabled.
Note After changing to enforcing mode, SELinux may deny some actions because of incorrect or missing SELinux policy rules. To view what actions SELinux denies, enter the following command as root: Alternatively, with the setroubleshoot-server package installed, enter the following command as root: If SELinux denies some actions, see Chapter 11, Troubleshooting for information about troubleshooting. Temporary changes in modes are covered in Section 1.4, "SELinux States and Modes" . 4.4.2. Disabling SELinux When SELinux is disabled, SELinux policy is not loaded at all; it is not enforced and AVC messages are not logged. Therefore, all benefits of running SELinux listed in Section 1.1, "Benefits of running SELinux" are lost. Important Red Hat strongly recommends using permissive mode instead of permanently disabling SELinux. See Section 4.4.1.1, "Permissive Mode" for more information about permissive mode. To permanently disable SELinux, follow the procedure below: Procedure 4.4. Disabling SELinux Configure SELINUX=disabled in the /etc/selinux/config file: Reboot your system. After reboot, confirm that the getenforce command returns Disabled : | [
"~]USD sestatus SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Max kernel policy version: 30",
"This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted",
"~]# reboot",
"This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted",
"~]# reboot",
"~]# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today",
"~]# grep \"SELinux is preventing\" /var/log/messages",
"This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= disabled SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted",
"~]USD getenforce Disabled"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-changing_selinux_modes |
Chapter 4. Adding user preferences | Chapter 4. Adding user preferences You can change the default preferences for your profile to meet your requirements. You can set your default project, topology view (graph or list), editing medium (form or YAML), language preferences, and resource type. The changes made to the user preferences are automatically saved. 4.1. Setting user preferences You can set the default user preferences for your cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Use the masthead to access the user preferences under the user profile. In the General section: In the Theme field, you can set the theme that you want to work in. The console defaults to the selected theme each time you log in. In the Perspective field, you can set the default perspective you want to be logged in to. You can select the Administrator or the Developer perspective as required. If a perspective is not selected, you are logged into the perspective you last visited. In the Project field, select a project you want to work in. The console defaults to the project every time you log in. In the Topology field, you can set the topology view to default to the graph or list view. If not selected, the console defaults to the last view you used. In the Create/Edit resource method field, you can set a preference for creating or editing a resource. If both the form and YAML options are available, the console defaults to your selection. In the Language section, select Default browser language to use the default browser language settings. Otherwise, select the language that you want to use for the console. In the Notifications section, you can toggle display notifications created by users for specific projects on the Overview page or notification drawer. In the Applications section: You can view the default Resource type . For example, if the OpenShift Serverless Operator is installed, the default resource type is Serverless Deployment . Otherwise, the default resource type is Deployment . You can select another resource type to be the default resource type from the Resource Type field. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/web_console/adding-user-preferences |
Chapter 4. Alerts | Chapter 4. Alerts 4.1. Setting up alerts For internal Mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File, and the object dashboards. These alerts are not available for external Mode. Note It might take a few minutes for alerts to be shown in the alert panel, because only firing alerts are visible in this panel. You can also view alerts with additional details and customize the display of Alerts in the OpenShift Container Platform. For more information, see Managing alerts . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/monitoring_openshift_data_foundation/alerts |
6.5. Role-Based Credential Map Identity Login Module | 6.5. Role-Based Credential Map Identity Login Module Warning RoleBasedCredentialMap is now deprecated. In some cases, access to data sources is defined by roles, and users are assigned these roles, taking on the privileges that come with having them. JBoss Data Virtualization provides a login module called RoleBasedCredentialMap for this purpose. An administrator can define a role-based authentication module where, given the role of the user from the primary login module, this module will hold a credential for that role. Each role is associated with a set of credentials. If a user has multiple roles, the first role that has the required credential will be used. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/role-based_credential_map_identity_login_module1 |
2.5. XA Transactions | 2.5. XA Transactions If the requesting application can participate in XA transactions, then your Connection object must override the getXAResource() method and provide the XAResource object for the application. To participate in crash recovery you must also extend the BasicResourceAdapter class and implement the public XAResource[] getXAResources(ActivationSpec[] specs) method. Red Hat JBoss Data Virtualization can make XA-capable resource adapters participate in distributed transactions. If they are not XA-capable, the datasource can participate in distributed queries but not distributed transactions. Transaction semantics are determined by how you configured "connection-factory" in a "resource-adapter" (that is, jta=true/false). | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/xa_transactions |
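A minimal Java sketch of the two methods described above follows. It is illustrative only: the org.teiid.resource.spi package for BasicConnection and BasicResourceAdapter, the javax.sql.XAConnection delegate, and the class names MyConnection and MyResourceAdapter are assumptions added for this sketch; only the getXAResource() and getXAResources(ActivationSpec[]) signatures come from the text above.

import javax.resource.ResourceException;
import javax.resource.spi.ActivationSpec;
import javax.transaction.xa.XAResource;

// Connection class of the resource adapter; the base class package and the
// XA-capable delegate are assumptions for this sketch.
class MyConnection extends org.teiid.resource.spi.BasicConnection {
    private final javax.sql.XAConnection delegate; // hypothetical underlying XA-capable connection

    MyConnection(javax.sql.XAConnection delegate) {
        this.delegate = delegate;
    }

    @Override
    public XAResource getXAResource() throws ResourceException {
        try {
            // Hand the underlying XAResource to the transaction manager so it can be enlisted.
            return delegate.getXAResource();
        } catch (java.sql.SQLException e) {
            throw new ResourceException(e);
        }
    }

    @Override
    public void close() throws ResourceException {
        try {
            delegate.close(); // release the physical connection
        } catch (java.sql.SQLException e) {
            throw new ResourceException(e);
        }
    }
}

// Resource adapter taking part in crash recovery.
class MyResourceAdapter extends org.teiid.resource.spi.BasicResourceAdapter {
    @Override
    public XAResource[] getXAResources(ActivationSpec[] specs) throws ResourceException {
        // Return the XAResource instances the transaction manager should consult
        // during recovery; returning an empty array keeps the sketch short.
        return new XAResource[0];
    }
}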
19.2. Configuring Near Caches | 19.2. Configuring Near Caches Near caching can be enabled and disabled via configuration without making any changes to the Hot Rod Client application. To enable near caching, configure the near caching mode (Eager or Lazy) on the client and optionally specify the number of entries to be kept in the cache. Near cache mode is configured using the NearCacheMode enumeration. The following example demonstrates how to configure Lazy near cache mode. Example 19.1. Configuring Lazy Near Cache Mode The following example demonstrates how to configure Eager near cache mode. Example 19.2. Configuring Eager Near Cache Mode Note Near cache size is unlimited by default but it can be changed to set a maximum size in terms of number of entries for the near cache. When the maximum size is reached, near cached entries are evicted using a least-recently-used (LRU) algorithm. Example 19.3. Configuring Near Cache Maximum Size Here, 100 is the maximum number of entries to keep in the near cache. Note This configuration also results in a disabled near cache mode when no mode is specified. | [
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.NearCacheMode; ... ConfigurationBuilder lazy = new ConfigurationBuilder(); lazy.nearCache().mode(NearCacheMode.LAZY).maxEntries(10);",
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.NearCacheMode; ConfigurationBuilder eager = new ConfigurationBuilder(); eager.nearCache().mode(NearCacheMode.EAGER)..maxEntries(10);",
"import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.nearCache().maxEntries(100);"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/configuring_near_caches |
Chapter 9. Detecting Dead Connections | Chapter 9. Detecting Dead Connections Sometimes clients stop unexpectedly and do not have a chance to clean up their resources. If this occurs, it can leave resources in a faulty state and result in the broker running out of memory or other system resources. The broker detects that a client's connection was not properly shut down at garbage collection time. The connection is then closed and a message similar to the one below is written to the log. The log captures the exact line of code where the client session was instantiated. This enables you to identify the error and correct it. 1 The line in the client code where the connection was instantiated. 9.1. Connection Time-To-Live Because the network connection between the client and the server can fail and then come back online, allowing a client to reconnect, AMQ Broker waits to clean up inactive server-side resources. This wait period is called a time-to-live (TTL). The default TTL for a network-based connection is 60000 milliseconds (1 minute). The default TTL on an in-VM connection is -1 , which means the broker never times out the connection on the broker side. Configuring Time-To-Live on the Broker If you do not want clients to specify their own connection TTL, you can set a global value on the broker side. This can be done by specifying the connection-ttl-override element in the broker configuration. The logic to check connections for TTL violations runs periodically on the broker, as determined by the connection-ttl-check-interval element. Procedure Edit <broker_instance_dir> /etc/broker.xml by adding the connection-ttl-override configuration element and providing a value for the time-to-live, as in the example below. <configuration> <core> ... <connection-ttl-override>30000</connection-ttl-override> 1 <connection-ttl-check-interval>1000</connection-ttl-check-interval> 2 ... </core> </configuration> 1 The global TTL for all connections is set to 30000 milliseconds. The default value is -1 , which allows clients to set their own TTL. 2 The interval between checks for dead connections is set to 1000 milliseconds. By default, the checks are done every 2000 milliseconds. 9.2. Disabling Asynchronous Connection Execution Most packets received on the broker side are executed on the remoting thread. These packets represent short-running operations and are always executed on the remoting thread for performance reasons. However, some packet types are executed using a thread pool instead of the remoting thread, which adds a little network latency. The packet types that use the thread pool are implemented within the Java classes listed below. The classes are all found in the package org.apache.activemq.artemis.core.protocol.core.impl.wireformat . RollbackMessage SessionCloseMessage SessionCommitMessage SessionXACommitMessage SessionXAPrepareMessage SessionXARollbackMessage Procedure To disable asynchronous connection execution, add the async-connection-execution-enabled configuration element to <broker_instance_dir> /etc/broker.xml and set it to false , as in the example below. The default value is true . <configuration> <core> ... <async-connection-execution-enabled>false</async-connection-execution-enabled> ... </core> </configuration> Additional resources To learn how to configure the AMQ Core Protocol JMS client to detect dead connections, see Detecting dead connections in the AMQ Core Protocol JMS documentation.
To learn how to configure a connection time-to-live in the AMQ Core Protocol JMS client, see Configuring time-to-live in the AMQ Core Protocol JMS documentation. | [
"[Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] I'm closing a JMS Conection you left open. Please make sure you close all connections explicitly before let ting them go out of scope! [Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] The session you didn't close was created here: java.lang.Exception at org.apache.activemq.artemis.core.client.impl.DelegatingSession.<init>(DelegatingSession.java:83) at org.acme.yourproject.YourClass (YourClass.java:666) 1",
"<configuration> <core> <connection-ttl-override>30000</connection-ttl-override> 1 <connection-ttl-check-interval>1000</connection-ttl-check-interval> 2 </core> </configuration>",
"<configuration> <core> <async-connection-execution-enabled>false</async-connection-execution-enabled> </core> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/dead_connections |
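On the client side referenced in the additional resources, the connection TTL can also be set when the connection factory is created. The following is a hedged sketch using the AMQ Core Protocol JMS client's ActiveMQConnectionFactory with the connectionTTL URL parameter; the broker address and the 30000 ms value are assumptions, and the linked client documentation remains the authoritative reference for the option names.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ClientTtlExample {
    public static void main(String[] args) throws Exception {
        // connectionTTL is expressed in milliseconds; -1 disables the client-side TTL.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616?connectionTTL=30000");

        Connection connection = factory.createConnection();
        try {
            connection.start();
            // ... produce or consume messages ...
        } finally {
            // Close connections explicitly so the broker does not have to rely on
            // TTL expiry or garbage-collection-time detection to reclaim resources.
            connection.close();
        }
    }
}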
Chapter 7. Users and Permissions | Chapter 7. Users and Permissions Table 7.1. Users and Permissions Subcommand Description and tasks user org Create a user: Add a role to a user: user-group Create a user group: Add a role to a user group: role Create a role: filter Create a filter and add it to a role: | [
"hammer user create --login user_name --mail user_mail --auth-source-id 1 --organization-ids org_ID1,org_ID2,",
"hammer user add-role --id user_id --role role_name",
"hammer user-group create --name ug_name",
"hammer user-group add-role --id ug_id --role role_name",
"hammer role create --name role_name",
"hammer filter create --role role_name --permission-ids perm_ID1,perm_ID2,"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/users_and_permissions |
Chapter 11. DeploymentLog [apps.openshift.io/v1] | Chapter 11. DeploymentLog [apps.openshift.io/v1] Description DeploymentLog represents the logs for a deployment Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log GET : read log of the specified DeploymentConfig 11.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log Table 11.1. Global path parameters Parameter Type Description name string name of the DeploymentLog HTTP method GET Description read log of the specified DeploymentConfig Table 11.2. HTTP responses HTTP code Reponse body 200 - OK DeploymentLog schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/deploymentlog-apps-openshift-io-v1 |
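Because the reference above only lists the endpoint, the following is a hedged sketch of reading a deployment log over HTTP with the JDK's built-in client. The API server address, namespace, DeploymentConfig name, and bearer token are placeholders, and TLS trust configuration for the cluster certificate is omitted for brevity.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeploymentLogReader {
    public static void main(String[] args) throws Exception {
        // Placeholder values; substitute your API server, namespace, DeploymentConfig
        // name, and a valid bearer token for your cluster.
        String apiServer = "https://api.example.com:6443";
        String namespace = "my-project";
        String name = "my-deploymentconfig";
        String token = System.getenv("OPENSHIFT_TOKEN");

        // GET /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/log
        URI uri = URI.create(apiServer
                + "/apis/apps.openshift.io/v1/namespaces/" + namespace
                + "/deploymentconfigs/" + name + "/log");

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode()); // 200 on success, 401 if unauthorized
        System.out.println(response.body());       // the deployment log
    }
}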
Chapter 1. Overview of images | Chapter 1. Overview of images 1.1. Understanding containers, images, and image streams Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name. 1.2. Images Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images . An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image. You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images. Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f... , which is usually shortened to 12 characters, such as fd44297e2ddb . You can create , manage , and use container images. 1.3. Image registry An image registry is a content server that can store and serve container images. For example: registry.redhat.io A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own internal registry for managing custom container images. 1.4. Image repository An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository: docker.io/openshift/jenkins-2-centos7 1.5. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 1.6. Image IDs An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example: docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324 1.7. Containers The basic units of OpenShift Container Platform applications are called containers. 
Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image. Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations. Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers. Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman , you can experiment with containers separately from OpenShift Container Platform. 1.8. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, rollback a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry.
Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your applications do not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. You can manage image streams, use image streams with Kubernetes resources , and trigger updates on image stream updates . 1.9. Image stream tags An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag. 1.10. Image stream images An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier. 1.11. Image stream triggers An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those. 1.12. How you can use the Cluster Samples Operator During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace. As a cluster administrator, you can use the Cluster Samples Operator to: Configure the Operator . Use the Operator with an alternate registry . 1.13. About templates A template is a definition of an object to be replicated. You can use templates to build and deploy configurations. 1.14. How you can use Ruby on Rails As a developer, you can use Ruby on Rails to: Write your application: Set up a database. Create a welcome page. Configure your application for OpenShift Container Platform. Store your application in Git. Deploy your application in OpenShift Container Platform: Create the database service. Create the frontend service. Create a route for your application. | [
"registry.redhat.io",
"docker.io/openshift/jenkins-2-centos7",
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/images/overview-of-images |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_9/proc_providing-feedback-on-red-hat-documentation_getting-started-with-dotnet-on-rhel-9 |
Chapter 6. EgressQoS [k8s.ovn.org/v1] | Chapter 6. EgressQoS [k8s.ovn.org/v1] Description EgressQoS is a CRD that allows the user to define a DSCP value for pods egress traffic on its namespace to specified CIDRs. Traffic from these pods will be checked against each EgressQoSRule in the namespace's EgressQoS, and if there is a match the traffic is marked with the relevant DSCP value. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object EgressQoSSpec defines the desired state of EgressQoS status object EgressQoSStatus defines the observed state of EgressQoS 6.1.1. .spec Description EgressQoSSpec defines the desired state of EgressQoS Type object Required egress Property Type Description egress array a collection of Egress QoS rule objects egress[] object 6.1.2. .spec.egress Description a collection of Egress QoS rule objects Type array 6.1.3. .spec.egress[] Description Type object Required dscp Property Type Description dscp integer DSCP marking value for matching pods' traffic. dstCIDR string DstCIDR specifies the destination's CIDR. Only traffic heading to this CIDR will be marked with the DSCP value. This field is optional, and in case it is not set the rule is applied to all egress traffic regardless of the destination. podSelector object PodSelector applies the QoS rule only to the pods in the namespace whose label matches this definition. This field is optional, and in case it is not set results in the rule being applied to all pods in the namespace. 6.1.4. .spec.egress[].podSelector Description PodSelector applies the QoS rule only to the pods in the namespace whose label matches this definition. This field is optional, and in case it is not set results in the rule being applied to all pods in the namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.5. .spec.egress[].podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.6. .spec.egress[].podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.7. .status Description EgressQoSStatus defines the observed state of EgressQoS Type object 6.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressqoses GET : list objects of kind EgressQoS /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses DELETE : delete collection of EgressQoS GET : list objects of kind EgressQoS POST : create an EgressQoS /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name} DELETE : delete an EgressQoS GET : read the specified EgressQoS PATCH : partially update the specified EgressQoS PUT : replace the specified EgressQoS /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name}/status GET : read status of the specified EgressQoS PATCH : partially update status of the specified EgressQoS PUT : replace status of the specified EgressQoS 6.2.1. /apis/k8s.ovn.org/v1/egressqoses Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind EgressQoS Table 6.2. 
HTTP responses HTTP code Reponse body 200 - OK EgressQoSList schema 401 - Unauthorized Empty 6.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressQoS Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressQoS Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK EgressQoSList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressQoS Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body EgressQoS schema Table 6.11. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 201 - Created EgressQoS schema 202 - Accepted EgressQoS schema 401 - Unauthorized Empty 6.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the EgressQoS namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete an EgressQoS Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressQoS Table 6.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.18. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressQoS Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressQoS Table 6.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body EgressQoS schema Table 6.24. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 201 - Created EgressQoS schema 401 - Unauthorized Empty 6.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressqoses/{name}/status Table 6.25. Global path parameters Parameter Type Description name string name of the EgressQoS namespace string object name and auth scope, such as for teams and projects Table 6.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified EgressQoS Table 6.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.28. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressQoS Table 6.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.30. Body parameters Parameter Type Description body Patch schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK EgressQoS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressQoS Table 6.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.33. Body parameters Parameter Type Description body EgressQoS schema Table 6.34. HTTP responses HTTP code Response body 200 - OK EgressQoS schema 201 - Created EgressQoS schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/egressqos-k8s-ovn-org-v1
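A minimal sketch of exercising the list endpoint above from a shell, assuming a reachable API server address in API_SERVER, a bearer token in TOKEN, and a namespace named example-ns (all placeholder values, not part of the reference above):

# List EgressQoS objects in the example-ns namespace in chunks of two, using the documented limit/continue parameters
curl -k -H "Authorization: Bearer $TOKEN" \
  "$API_SERVER/apis/k8s.ovn.org/v1/namespaces/example-ns/egressqoses?limit=2"

# Pass the token from metadata.continue of the previous response to fetch the next chunk
curl -k -H "Authorization: Bearer $TOKEN" \
  "$API_SERVER/apis/k8s.ovn.org/v1/namespaces/example-ns/egressqoses?limit=2&continue=<token-from-previous-response>"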
Chapter 20. Atomic Host and Containers | Chapter 20. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/atomic_host_and_containers |
Chapter 2. Starting and Stopping Apache Karaf | Chapter 2. Starting and Stopping Apache Karaf Abstract Apache Karaf provides simple command-line tools for starting and stopping the server. 2.1. Starting Apache Karaf The default way to deploy the Apache Karaf runtime is to deploy it as a standalone server with an active console. You can also deploy the runtime as a background process without a console. 2.1.1. Setting up your environment You can start the Karaf runtime directly from the bin subdirectory of your installation, without modifying your environment. However, if you want to start it in a different folder you need to add the bin directory of your Karaf installation to the PATH environment variable, as follows: Windows Linux/UNIX 2.1.2. Launching the runtime in console mode If you are launching the Karaf runtime from the installation directory use the following command: Windows Linux/UNIX If Karaf starts up correctly you should see the following on the console: Note Since version Fuse 6.2.1, launching in console mode creates two processes: the parent process ./bin/karaf , which is executing the Karaf console; and the child process, which is executing the Karaf server in a java JVM. The shutdown behaviour remains the same as before, however. That is, you can shut down the server from the console using either Ctrl-D or osgi:shutdown , which kills both processes. 2.1.3. Launching the runtime in server mode Launching in server mode runs Apache Karaf in the background, without a local console. You would then connect to the running instance using a remote console. See Section 17.2, "Connecting and Disconnecting Remotely" for details. To launch Karaf in server mode, run the following Windows Linux/UNIX 2.1.4. Launching the runtime in client mode In production environments you might want to have a runtime instance accessible using only a local console. In other words, you cannot connect to the runtime remotely through the SSH console port. You can do this by launching the runtime in client mode, using the following command: Windows Linux/UNIX Note Launching in client mode suppresses only the SSH console port (usually port 8101). Other Karaf server ports (for example, the JMX management RMI ports) are opened as normal. 2.1.5. Running Fuse in debug mode Running Fuse in debug mode helps identify and resolve errors more efficiently. This option is disabled by default. When enabled, Fuse starts a JDWP socket on port 5005 . You have three approaches to run Fuse in debug mode . Section 2.1.5.1, "Use the Karaf environment variable" Section 2.1.5.2, "Run Fuse debug" Section 2.1.5.3, "Run Fuse debugs" 2.1.5.1. Use the Karaf environment variable This approach enables the KARAF_DEBUG environment variable ( =1 ), and then you start the container. 2.1.5.2. Run Fuse debug This approach runs debug where the suspend option is set to n (no). 2.1.5.3. Run Fuse debugs This approach runs debugs where the suspend option is set to y (yes). Note Setting suspend to yes causes the JVM to pause just before running main() until a debugger is attached and then it resumes execution. 2.2. Stopping Apache Karaf You can stop an instance of Apache Karaf either from within a console, or using a stop script. 2.2.1. Stopping an instance from a local console If you launched the Karaf instance by running fuse or fuse client , you can stop it by doing one of the following at the karaf> prompt: Type shutdown Press Ctrl + D 2.2.2. 
Stopping an instance running in server mode You can stop a locally running Karaf instance (root container), by invoking the stop(.bat) from the InstallDir/bin directory, as follows: Windows Linux/UNIX The shutdown mechanism invoked by the Karaf stop script is similar to the shutdown mechanism implemented in Apache Tomcat. The Karaf server opens a dedicated shutdown port ( not the same as the SSH port) to receive the shutdown notification. By default, the shutdown port is chosen randomly, but you can configure it to use a specific port if you prefer. You can optionally customize the shutdown port by setting the following properties in the InstallDir/etc/config.properties file: karaf.shutdown.port Specifies the TCP port to use as the shutdown port. Setting this property to -1 disables the port. Default is 0 (for a random port). Note If you wanted to use the bin/stop script to shut down the Karaf server running on a remote host, you would need to set this property equal to the remote host's shutdown port. But beware that this setting also affects the Karaf server located on the same host as the etc/config.properties file. karaf.shutdown.host Specifies the hostname to which the shutdown port is bound. This setting could be useful on a multi-homed host. Defaults to localhost . Note If you wanted to use the bin/stop script to shut down the Karaf server running on a remote host, you would need to set this property to the hostname (or IP address) of the remote host. But beware that this setting also affects the Karaf server located on the same host as the etc/config.properties file. karaf.shutdown.port.file After the Karaf instance starts up, it writes the current shutdown port to the file specified by this property. The stop script reads the file specified by this property to discover the value of the current shutdown port. Defaults to USD{karaf.data}/port . karaf.shutdown.command Specifies the UUID value that must be sent to the shutdown port in order to trigger shutdown. This provides an elementary level of security, as long as the UUID value is kept a secret. For example, the etc/config.properties file could be read-protected to prevent this value from being read by ordinary users. When Apache Karaf is started for the very first time, a random UUID value is automatically generated and this setting is written to the end of the etc/config.properties file. Alternatively, if karaf.shutdown.command is already set, the Karaf server uses the pre-existing UUID value (which enables you to customize the UUID setting, if required). Note If you wanted to use the bin/stop script to shut down the Karaf server running on a remote host, you would need to set this property to be equal to the value of the remote host's karaf.shutdown.command . But beware that this setting also affects the Karaf server located on the same host as the etc/config.properties file. 2.2.3. Stopping a remote instance You can stop a container instance running on a remote host as described in Section 17.3, "Stopping a Remote Container" . | [
"set PATH=%PATH%;InstallDir\\bin",
"export PATH=USDPATH,InstallDir/bin`",
"bin\\fuse.bat",
"./bin/fuse",
"Red Hat Fuse starting up. Press Enter to open the shell now 100% [========================================================================] Karaf started in 8s. Bundle stats: 220 active, 220 total ____ _ _ _ _ _____ | _ \\ ___ __| | | | | | __ _| |_ | ___| _ ___ ___ | |_) / _ \\/ _` | | |_| |/ _` | __| | |_ | | | / __|/ _ | _ < __/ (_| | | _ | (_| | |_ | _|| |_| \\__ \\ __/ |_| \\_\\___|\\__,_| |_| |_|\\__,_|\\__| |_| \\__,_|___/___| Fuse (7.x.x.fuse-xxxxxx-redhat-xxxxx) http://www.redhat.com/products/jbossenterprisemiddleware/fuse/ Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command. Open a browser to http://localhost:8181/hawtio to access the management console Hit '<ctrl-d>' or 'shutdown' to shutdown Red Hat Fuse. karaf@root()>",
"bin\\start.bat",
"./bin/start",
"bin\\fuse.bat client",
"./bin/fuse client",
"export KARAF_DEBUG=1 bin/start",
"bin/fuse debug",
"bin/fuse debugs",
"bin\\stop.bat",
"./bin/stop"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/ESBRuntimeStartStop |
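The shutdown properties described in Section 2.2.2 can be collected in InstallDir/etc/config.properties. The following is only a sketch with example values; the port number, host, and UUID shown are illustrative, not defaults:

# etc/config.properties (excerpt) - illustrative values only
karaf.shutdown.port = 8100
karaf.shutdown.host = localhost
karaf.shutdown.port.file = ${karaf.data}/port
# Secret that bin/stop must send to the shutdown port; keep this file read-protected
karaf.shutdown.command = 4f9e2b1c-example-uuid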
13.2.2. Setting up the sssd.conf File | 13.2.2. Setting up the sssd.conf File SSSD services and domains are configured in a .conf file. By default, this is /etc/sssd/sssd.conf - although that file must be created and configured manually, since SSSD is not configured after installation. 13.2.2.1. Creating the sssd.conf File There are three parts of the SSSD configuration file: [sssd] , for general SSSD process and operational configuration; this basically lists the configured services, domains, and configuration parameters for each [service_name] , for configuration options for each supported system service, as described in Section 13.2.4, "SSSD and System Services" [domain_type/DOMAIN_NAME] , for configuration options for each configured identity provider Important While services are optional, at least one identity provider domain must be configured before the SSSD service can be started. Example 13.1. Simple sssd.conf File [sssd] domains = LOCAL services = nss config_file_version = 2 [nss] filter_groups = root filter_users = root [domain/LOCAL] id_provider = local auth_provider = local access_provider = permit The [sssd] section has three important parameters: domains lists all of the domains, configured in the sssd.conf , which SSSD uses as identity providers. If a domain is not listed in the domains key, it is not used by SSSD, even if it has a configuration section. services lists all of the system services, configured in the sssd.conf , which use SSSD; when SSSD starts, the corresponding SSSD service is started for each configured system service. If a service is not listed in the services key, it is not used by SSSD, even if it has a configuration section. config_file_version sets the version of the configuration file to set file format expectations. This is version 2, for all recent SSSD versions. Note Even if a service or domain is configured in the sssd.conf file, SSSD does not interact with that service or domain unless it is listed in the services or domains parameters, respectively, in the [sssd] section. Other configuration parameters are listed in the sssd.conf man page. Each service and domain parameter is described in its respective configuration section in this chapter and in their man pages. 13.2.2.2. Using a Custom Configuration File By default, the sssd process assumes that the configuration file is /etc/sssd/sssd.conf . An alternative file can be passed to SSSD by using the -c option with the sssd command: | [
"[sssd] domains = LOCAL services = nss config_file_version = 2 [nss] filter_groups = root filter_users = root [domain/LOCAL] id_provider = local auth_provider = local access_provider = permit",
"~]# sssd -c /etc/sssd/customfile.conf --daemon"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/about-sssd-conf |
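To illustrate the note that SSSD ignores a domain that is not listed in the domains key, the following sketch extends the simple sssd.conf with a hypothetical LDAP domain (EXAMPLE, with an invented server name) that is configured but deliberately left out of domains, so SSSD never queries it:

[sssd]
# EXAMPLE is intentionally omitted here, so its section below is not used
domains = LOCAL
services = nss
config_file_version = 2

[nss]
filter_groups = root
filter_users = root

[domain/LOCAL]
id_provider = local
auth_provider = local
access_provider = permit

[domain/EXAMPLE]
# Defined but inactive until added to the domains list in [sssd]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com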
3.2. Performance Tuning with tuned and tuned-adm The tuned tuning service can adapt the operating system to perform better under certain workloads by setting a tuning profile. The tuned-adm command-line tool allows users to switch between different tuning profiles. tuned Profiles Overview Several pre-defined profiles are included for common use cases, but tuned also enables you to define custom profiles, which can be either based on one of the pre-defined profiles, or defined from scratch. In Red Hat Enterprise Linux 7, the default profile is throughput-performance . The profiles provided with tuned are divided into two categories: power-saving profiles, and performance-boosting profiles. The performance-boosting profiles include profiles that focus on the following aspects: low latency for storage and network high throughput for storage and network virtual machine performance virtualization host performance tuned Boot Loader plug-in You can use the tuned Bootloader plug-in to add parameters to the kernel (boot or dracut) command line. Note that only the GRUB 2 boot loader is supported and a reboot is required to apply profile changes. For example, to add the quiet parameter to a tuned profile, include the following lines in the tuned.conf file: Switching to another profile or manually stopping the tuned service removes the additional parameters. If you shut down or reboot the system, the kernel parameters persist in the grub.cfg file. Environment Variables and Expanding tuned Built-In Functions If you run tuned-adm profile profile_name and then grub2-mkconfig -o profile_path after updating GRUB 2 configuration, you can use Bash environment variables, which are expanded after running grub2-mkconfig . For example, the following environment variable is expanded to nfsroot=/root : You can use tuned variables as an alternative to environment variables. In the following example, ${isolated_cores} expands to 1,2 , so the kernel boots with the isolcpus=1,2 parameter: In the following example, ${non_isolated_cores} expands to 0,3-5 , and the cpulist_invert built-in function is called with the 0,3-5 arguments: The cpulist_invert function inverts the list of CPUs. For a 6-CPU machine, the inversion is 1,2 , and the kernel boots with the isolcpus=1,2 command-line parameter. Using tuned environment variables reduces the amount of necessary typing. You can also use various built-in functions together with tuned variables. If the built-in functions do not satisfy your needs, you can create custom functions in Python and add them to tuned in the form of plug-ins. Variables and built-in functions are expanded at run time when the tuned profile is activated. The variables can be specified in a separate file. You can, for example, add the following lines to tuned.conf : If you add isolated_cores=1,2 to the /etc/tuned/my-variables.conf file, the kernel boots with the isolcpus=1,2 parameter. Modifying Default System tuned Profiles There are two ways of modifying the default system tuned profiles. You can either create a new tuned profile directory, or copy the directory of a system profile and edit the profile as needed. Procedure 3.1. Creating a New Tuned Profile Directory In /etc/tuned/ , create a new directory named the same as the profile you want to create: /etc/tuned/my_profile_name/ . In the new directory, create a file named tuned.conf , and include the following lines at the top: Include your profile modifications.
For example, to use the settings from the throughput-performance profile with the value of vm.swappiness set to 5, instead of the default 10, include the following lines: To activate the profile, run: Creating a directory with a new tuned.conf file enables you to keep all your profile modifications after system tuned profiles are updated. Alternatively, copy the directory with a system profile from /usr/lib/tuned/ to /etc/tuned/ . For example: Then, edit the profile in /etc/tuned according to your needs. Note that if there are two profiles of the same name, the profile located in /etc/tuned/ is loaded. The disadvantage of this approach is that if a system profile is updated after a tuned upgrade, the changes will not be reflected in the now-outdated modified version. Resources For more information, see Section A.4, "tuned" and Section A.5, "tuned-adm" . For detailed information on using tuned and tuned-adm , see the tuned (8) and tuned-adm (1) manual pages. | [
"[bootloader] cmdline=quiet",
"[bootloader] cmdline=\"nfsroot=USDHOME\"",
"[variables] isolated_cores=1,2 [bootloader] cmdline=isolcpus=USD{isolated_cores}",
"[variables] non_isolated_cores=0,3-5 [bootloader] cmdline=isolcpus=USD{f:cpulist_invert:USD{non_isolated_cores}}",
"[variables] include=/etc/tuned/ my-variables.conf [bootloader] cmdline=isolcpus=USD{isolated_cores}",
"[main] include= profile_name",
"[main] include=throughput-performance [sysctl] vm.swappiness=5",
"tuned-adm profile my_profile_name",
"cp -r /usr/lib/tuned/throughput-performance /etc/tuned"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-tuned_and_tuned_adm |
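Procedure 3.1 can be run end to end as a short shell session; this is only a sketch that reuses the my_profile_name and vm.swappiness values from the example above:

# Create the custom profile directory and its tuned.conf
mkdir /etc/tuned/my_profile_name
cat > /etc/tuned/my_profile_name/tuned.conf <<'EOF'
[main]
include=throughput-performance

[sysctl]
vm.swappiness=5
EOF

# Activate the new profile and verify which profile is in use
tuned-adm profile my_profile_name
tuned-adm active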
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/scaling_storage/making-open-source-more-inclusive |
10.5.2. Patching the System | 10.5.2. Patching the System Patching affected systems is a more dangerous course of action and should be undertaken with great caution. The problem with patching a system instead of reinstalling is determining whether or not a given system is cleansed of trojans, security holes, and corrupted data. Most rootkits (programs or packages that a cracker uses to gain root access to a system), trojan system commands, and shell environments are designed to hide malicious activities from cursory audits. If the patch approach is taken, only trusted binaries should be used (for example, from a mounted, read-only CD-ROM). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-response-restore-patch |
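One hedged illustration of the trusted-binaries advice: boot the suspect machine from known-good rescue media and audit the installed packages from that environment, rather than trusting the tools on the possibly compromised disk. The /mnt/sysimage mount point is an assumption based on where rescue mode typically mounts the installed system, and the RPM database itself could have been altered, so treat this only as a first-pass check:

# Run from a rescue environment booted off trusted, read-only media
# Verify installed packages against the RPM database of the suspect system
rpm -Va --root /mnt/sysimage | less
# Entries reporting changed sizes, checksums, or permissions on system binaries
# (for example /bin/ps or /bin/ls) warrant closer inspection before deciding
# between patching and reinstalling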
2.2. A Three-Tier keepalived Load Balancer Configuration | 2.2. A Three-Tier keepalived Load Balancer Configuration Figure 2.2, "A Three-Tier Load Balancer Configuration" shows a typical three-tier Keepalived Load Balancer topology. In this example, the active LVS router routes the requests from the Internet to the pool of real servers. Each of the real servers then accesses a shared data source over the network. Figure 2.2. A Three-Tier Load Balancer Configuration This configuration is ideal for busy FTP servers, where accessible data is stored on a central, highly available server and accessed by each real server by means of an exported NFS directory or Samba share. This topology is also recommended for websites that access a central, highly available database for transactions. Additionally, using an active-active configuration with the Load Balancer, administrators can configure one high-availability cluster to serve both of these roles simultaneously. The third tier in the above example does not have to use the Load Balancer, but failing to use a highly available solution would introduce a critical single point of failure. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-cm-vsa |
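As a hedged sketch of the third tier, the shared data server might export a directory over NFS that every real server mounts; the host name, network, and paths below are invented for illustration:

# On the shared data server (data.example.com): /etc/exports
/srv/ftp    192.168.1.0/24(ro,sync)

# On each real server, mount the shared content before starting the FTP or web service
mount -t nfs data.example.com:/srv/ftp /var/ftp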
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_creator_guide/making-open-source-more-inclusive |
Chapter 3. Preparing the overcloud role for hyperconverged nodes | Chapter 3. Preparing the overcloud role for hyperconverged nodes To use hyperconverged nodes, you need to define a role for it. Red Hat OpenStack Platform (RHOSP) provides the predefined role ComputeHCI for hyperconverged nodes. This role colocates the Compute and Ceph object storage daemon (OSD) services, allowing you to deploy them together on the same hyperconverged node. To use the ComputeHCI role, you need to generate a custom roles_data.yaml file that includes it, along with all the other roles you are using in your deployment. The following procedure details how to use and configure this predefined role. Procedure Generate a custom roles_data.yaml file that includes ComputeHCI , along with other roles you intend to use for the overcloud: For more information about custom roles, see Composable Services and Custom Roles and Examining the roles_data file . Create a new heat template named ports.yaml in ~/templates . Configure port assignments for the ComputeHCI role by adding the following configuration to the ports.yaml file: Replace <ext_port_file> with the name of the external port file. Set to "external" if you are using DVR, otherwise set to "noop". For details on DVR, see Configure Distributed Virtual Routing (DVR) . Replace <storage_mgmt_file> with the name of the storage management file. Set to one of the following values: Value Description storage_mgmt Use if you do not want to select from a pool of IPs, and your environment does not use IPv6 addresses. storage_mgmt_from_pool Use if you want the ComputeHCI role to select from a pool of IPs. storage_mgmt_v6 Use if your environment uses IPv6 addresses. storage_mgmt_from_pool_v6 Use if you want the ComputeHCI role to select from a pool of IPv6 addresses For more information, see Basic network isolation . Create a flavor for the ComputeHCI role: Configure the flavor properties: Map the flavor to a new profile: Retrieve a list of your nodes to identify their UUIDs: Tag nodes into the new profile: For more information, see Manually Tagging the Nodes and Assigning Nodes and Flavors to Roles . Add the following configuration to the node-info.yaml file to associate the computeHCI flavor with the ComputeHCI role: | [
"openstack overcloud roles generate -o /home/stack/roles_data.yaml Controller ComputeHCI Compute CephStorage",
"resource_registry: OS::TripleO::ComputeHCI::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/<ext_port_file>.yaml OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage.yaml OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/<storage_mgmt_file>.yaml",
"openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 computeHCI",
"openstack flavor set --property \"cpu_arch\"=\"x86_64\" --property \"capabilities:boot_option\"=\"local\" --property \"resources:CUSTOM_BAREMETAL\"=\"1\" --property \"resources:DISK_GB\"=\"0\" --property \"resources:MEMORY_MB\"=\"0\" --property \"resources:VCPU\"=\"0\" computeHCI",
"openstack flavor set --property \"capabilities:profile\"=\"computeHCI\" computeHCI",
"openstack baremetal node list",
"openstack baremetal node set --property capabilities='profile:computeHCI,boot_option:local' <UUID>",
"parameter_defaults: OvercloudComputeHCIFlavor: computeHCI ComputeHCICount: 3"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/hyperconverged_infrastructure_guide/prepare-overcloud-role |
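Once the role, port, flavor, and node count files above exist, they are typically passed to the overcloud deployment command together with the rest of your environment files. Treat the following only as a sketch of how the files created in this chapter might be referenced; the exact command depends on your deployment:

# roles_data.yaml is the custom roles file generated earlier; ports.yaml and
# node-info.yaml are the templates created in this chapter (stack user's home assumed)
openstack overcloud deploy --templates \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/templates/ports.yaml \
  -e /home/stack/templates/node-info.yaml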
Chapter 4. Optimizing MTA performance | Chapter 4. Optimizing MTA performance MTA performance depends on a number of factors, including hardware configuration, the number and types of files in the application, the size and number of applications to be evaluated, and whether the application contains source or compiled code. For example, a file that is larger than 10 MB may need a lot of time to process. In general, MTA spends about 40% of the time decompiling classes, 40% of the time executing rules, and the remainder of the time processing other tasks and generating reports. This section describes what you can do to improve the performance of MTA. 4.1. Deploying and running the application Try these suggestions first before upgrading hardware. If possible, run MTA against the source code instead of the archives. This eliminates the need to decompile additional JARs and archives. Increase your ulimit when analyzing large applications. See this Red Hat Knowledgebase article for instructions on how to do this for Red Hat Enterprise Linux. If you have access to a server that has better resources than your laptop or desktop machine, you may want to consider running MTA on that server. 4.2. Upgrading hardware If the application and command-line suggestions above do not improve performance, you may need to upgrade your hardware. If you have access to a server that has better resources than your laptop/desktop, then you may want to consider running MTA on that server. Very large applications that require decompilation have large memory requirements. 8 GB RAM is recommended. This allows 3 - 4 GB RAM for use by the JVM. An upgrade from a single or dual-core to a quad-core CPU processor provides better performance. Disk space and fragmentation can impact performance. A fast disk, especially a solid-state drive (SSD), with greater than 4 GB of defragmented disk space should improve performance. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/optimize-performance_cli-guide |
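A small sketch of the ulimit suggestion above; the value 65536 is an arbitrary example, and a persistent change is normally made through /etc/security/limits.conf rather than per shell:

# Check the current open-file limit for the shell that will run the analysis
ulimit -n

# Raise it for the current session before launching MTA against a large application
ulimit -n 65536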
probe::sunrpc.clnt.call_async | probe::sunrpc.clnt.call_async Name probe::sunrpc.clnt.call_async - Make an asynchronous RPC call Synopsis sunrpc.clnt.call_async Values progname the RPC program name prot the IP protocol number proc the procedure number in this RPC call procname the procedure name in this RPC call vers the RPC program version number flags flags servername the server machine name xid current transmission id port the port number prog the RPC program number dead whether this client is abandoned | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-call-async |
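A minimal sketch of using this probe point from the command line; the script simply prints a selection of the values documented above for each asynchronous call and is illustrative rather than part of the tapset:

# Print one line per asynchronous RPC call made by kernel RPC clients
stap -e 'probe sunrpc.clnt.call_async {
  printf("%s: prog=%d vers=%d proc=%s server=%s port=%d xid=%d\n",
         progname, prog, vers, procname, servername, port, xid)
}'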
Chapter 2. Performance Monitoring Tools | Chapter 2. Performance Monitoring Tools This chapter describes tools used to monitor guest virtual machine environments. 2.1. perf kvm You can use the perf command with the kvm option to collect and analyze guest operating system statistics from the host. The perf package provides the perf command. It is installed by running the following command: In order to use perf kvm in the host, you must have access to the /proc/modules and /proc/kallsyms files from the guest. See Procedure 2.1, "Copying /proc files from guest to host" to transfer the files into the host and run reports on the files. Procedure 2.1. Copying /proc files from guest to host Important If you directly copy the required files (for instance, using scp ) you will only copy files of zero length. This procedure describes how to first save the files in the guest to a temporary location (with the cat command), and then copy them to the host for use by perf kvm . Log in to the guest and save files Log in to the guest and save /proc/modules and /proc/kallsyms to a temporary location, /tmp : Copy the temporary files to the host Once you have logged off from the guest, run the following example scp commands to copy the saved files to the host. You should substitute your host name and TCP port if they are different: You now have two files from the guest ( guest-kallsyms and guest-modules ) on the host, ready for use by perf kvm . Recording and reporting events with perf kvm Using the files obtained in the steps, recording and reporting of events in the guest, the host, or both is now possible. Run the following example command: Note If both --host and --guest are used in the command, output will be stored in perf.data.kvm . If only --host is used, the file will be named perf.data.host . Similarly, if only --guest is used, the file will be named perf.data.guest . Pressing Ctrl-C stops recording. Reporting events The following example command uses the file obtained by the recording process, and redirects the output into a new file, analyze . View the contents of the analyze file to examine the recorded events: # cat analyze # Events: 7K cycles # # Overhead Command Shared Object Symbol # ........ ............ ................. ......................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...] | [
"yum install perf",
"cat /proc/modules > /tmp/modules cat /proc/kallsyms > /tmp/kallsyms",
"scp root@GuestMachine:/tmp/kallsyms guest-kallsyms scp root@GuestMachine:/tmp/modules guest-modules",
"perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data",
"perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm --force > analyze",
"cat analyze Events: 7K cycles # Overhead Command Shared Object Symbol ........ ............ ................. ...................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-monitoring_tools |
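The steps in Procedure 2.1 together with the record and report commands can be strung together in a small helper script. This is only a sketch: it assumes SSH access to the guest as root@GuestMachine, and it omits the explicit -o option on record so that the output file name matches the perf.data.kvm expected by the report step, as described in the note above:

#!/bin/sh
# Collect guest symbol files, record host and guest events, and produce a report
GUEST=root@GuestMachine

ssh "$GUEST" 'cat /proc/modules > /tmp/modules; cat /proc/kallsyms > /tmp/kallsyms'
scp "$GUEST":/tmp/kallsyms guest-kallsyms
scp "$GUEST":/tmp/modules guest-modules

# Record events for host and guest until interrupted with Ctrl-C
perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a

# Build the report from the recorded data and save it for review
perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm --force > analyze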
Chapter 4. Message-Driven Beans | Chapter 4. Message-Driven Beans 4.1. Message-Driven Beans Message-driven Beans (MDBs) provide an event driven model for application development. The methods of MDBs are not injected into or invoked from client code but are triggered by the receipt of messages from a messaging service such as a Jakarta Messaging server. The Jakarta EE specification requires that Jakarta Messaging is supported but other messaging systems can be supported as well. MDBs are a special kind of stateless session beans. They implement a method called onMessage(Message message) . This method is triggered when a Jakarta Messaging destination on which the MDB is listening receives a message. That is, MDBs are triggered by the receipt of messages from a Jakarta Messaging provider, unlike the stateless session beans where methods are usually called by Jakarta Enterprise Beans clients. MDB processes messages asynchronously. By default each MDB can have up to 16 sessions, where each session processes a message. There are no message order guarantees. In order to achieve message ordering, it is necessary to limit the session pool for the MDB to 1 . Example: Management CLI Commands to Set Session Pool to 1 : 4.2. Message-Driven Beans Controlled Delivery JBoss EAP provides three attributes that control active reception of messages on a specific MDB: Delivery Active Delivery Groups Clustered Singleton MDBs 4.2.1. Delivery Active The delivery active configuration of the message-driven beans (MDB) indicates whether the MDB is receiving messages or not. If an MDB is not receiving messages, then the messages will be saved in the queue or topic according to the topic or queue rules. You can configure the active attribute of the delivery-group using XML or annotations, and you can change its value after deployment using the management CLI. By default, the active attribute is activated and delivery of messages occurs as soon as the MDB is deployed. Configuring Delivery Active in the jboss-ejb3.xml File In the jboss-ejb3.xml file, set the value of active to false to indicate that the MDB will not be receiving messages as soon as it is deployed: <?xml version="1.1" encoding="UTF-8"?> <jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:d="urn:delivery-active:1.1" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd" version="3.1" impl-version="2.0"> <assembly-descriptor> <d:delivery> <ejb-name>HelloWorldQueueMDB</ejb-name> <d:active>false</d:active> </d:delivery> </assembly-descriptor> </jboss:ejb-jar> If you want to apply the active value to all MDBs in your application, you can use a wildcard * in place of the ejb-name . Configuring Delivery Active Using Annotations You can also use the org.jboss.ejb3.annotation.DeliveryActive annotation. For example: @MessageDriven(name = "HelloWorldMDB", activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/HELLOWORLDMDBQueue"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") }) @DeliveryActive(false) public class HelloWorldMDB implements MessageListener { public void onMessage(Message rcvMessage) { // ... 
} } If you use Maven to build your project, make sure you add the following dependency to the pom.xml file of your project: <dependency> <groupId>org.jboss.ejb3</groupId> <artifactId>jboss-ejb3-ext-api</artifactId> <version>2.2.0.Final</version> </dependency> Configuring Delivery Active Using the Management CLI You can configure the active attribute of the delivery-group after deployment using the management CLI. These management operations dynamically change the value of the active attribute, enabling or disabling delivery for the MDB. This method of changing the delivery active value does not persist if you restart the server. At runtime, connect to the instance you want to manage, then enter the path of the MDB for which you want to manage the delivery. For example: Navigate to the instance you want to manage: To stop the delivery to the MDB: To start the delivery to the MDB: View the MDB Delivery Active Status You can view the current delivery active status of any MDB using the management console: Select the Runtime tab and select the appropriate server. Click EJB and select the child resource, for example HelloWorldQueueMDB . Result You see the status as Delivery Active: true or Delivery Active: false . 4.2.2. Delivery Groups Delivery groups provide a way to manage the delivery-active state for a group of MDBs. An MDB can belong to one or more delivery groups. Message delivery is enabled only when all the delivery groups that an MDB belongs to are active. For a clustered singleton MDB, message delivery is active only in the singleton node of the cluster and only if all the delivery groups associated with the MDB are active. You can add a delivery group to the ejb3 subsystem using either the XML configuration or the management CLI. Configuring Delivery Group in the jboss-ejb3.xml File <delivery> <ejb-name>MdbName<ejb-name> <delivery-group>passive</delivery-group> </delivery> On the server side, delivery-groups can be enabled by having their active attribute set to true , or disabled by having their active attribute set to false , as shown in the example below: <delivery-groups> <delivery-group name="group" active="true"/> </delivery-groups> Configuring Delivery Group Using the Management CLI The state of delivery-groups can be updated using the management CLI. For example: When you set the delivery active in the jboss-ejb3.xml file or using the annotation, it persists on server restart. However, when you use the management CLI to stop or start the delivery, it does not persist on server restart. Configuring Multiple Delivery Groups Using Annotations You can use the org.jboss.ejb3.annotation.DeliveryGroup annotation on each MDB class belonging to a group: @MessageDriven(name = "HelloWorldQueueMDB", activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/HELLOWORLDMDBQueue"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") }) @DeliveryGroup("delivery-group-1") @DeliveryGroup("delivery-group-2") public class HelloWorldQueueMDB implements MessageListener { ... } 4.2.3. Clustered Singleton MDBs When an MDB is identified as a clustered singleton and is deployed in a cluster, only one node is active. This node can consume messages serially. When the server node fails, the active node from the clustered singleton MDBs starts consuming the messages. 
Identify an MDB as a Clustered Singleton You can use one of the following procedures to identify an MDB as a clustered singleton. Use the clustered-singleton XML element as shown in the example below: <?xml version="1.1" encoding="UTF-8"?> <jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:c="urn:clustering:1.1" xmlns:d="urn:delivery-active:1.2" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd" version="3.1" impl-version="2.0"> <assembly-descriptor> <c:clustering> <ejb-name>HelloWorldQueueMDB</ejb-name> <c:clustered-singleton>true</c:clustered-singleton> </c:clustering> <d:delivery> <ejb-name>*</ejb-name> <d:group>delivery-group-1</d:group> <d:group>delivery-group-2</d:group> </d:delivery> </assembly-descriptor> </jboss:ejb-jar> In your MDB class, use the @org.jboss.ejb3.annotation.ClusteredSingleton . This procedure requires no extra configuration at the server. You need to run the service in a clustered environment. Note You have to activate the delivery-group in the entire cluster, specifically, in all nodes of the cluster, because you do not know which node of the cluster is chosen to be the singleton master . If the server chooses a node to be singleton master , and that node does not have the required delivery-group activated, no node in the cluster receives the messages. The messaging-clustering-singleton quickstart, which ships with JBoss EAP, demonstrates the use of clustering with integrated Apache ActiveMQ Artemis. It uses the same source code as the helloworld-mdb quickstart, with a difference only in the configuration to run it as a clustered singleton. There are two Jakarta Messaging resources contained in this quickstart: A queue named HELLOWORLDMDBQueue bound in the Java Naming and Directory Interface as java:/queue/HELLOWORLDMDBQueue A topic named HELLOWORLDMDBTopic bound in the Java Naming and Directory Interface as java:/topic/HELLOWORLDMDBTopic Both contain a singleton configuration as specified in the jboss-ejb3.xml file: <c:clustering> <ejb-name>*</ejb-name> <c:clustered-singleton>true</c:clustered-singleton> </c:clustering> The wildcard asterisk * in the <ejb-name> element indicates that all the MDBs contained in the application will be clustered singleton. As a result, only one node in the cluster will have those MDBs active at a specific time. If this active node shuts down, another node in the cluster will become the active node with the MDBs, which then becomes the singleton provider. You can also find a configuration for the delivery group in the jboss-ejb3.xml file: <d:delivery> <ejb-name>HelloWorldTopicMDB</ejb-name> <d:group>my-mdb-delivery-group</d:group> </d:delivery> In this case, only one of the MDBs, HelloWorldTopicMDB , is associated with a delivery group. All the delivery groups used by an MDB must be configured in the ejb3 subsystem configuration. The delivery group can be enabled or disabled. If the delivery group is disabled in a cluster node, all the MDBs belonging to that delivery group become inactive in the respective cluster node. When using the delivery groups in a non-clustered environment, the MDB is active whenever the delivery group is enabled. 
If a delivery group is used in conjunction with the singleton provider, the MDB can be active in the singleton provider node only if that node has the delivery group enabled. Otherwise, the MDB will be inactive in that node, and all the other nodes of the cluster. See the README.html file included with this quickstart for detailed instructions about how to configure the server for messaging clustering and to review the code examples. For information on how to download and use the JBoss EAP quickstarts, see the Using the Quickstart Examples section in the JBoss EAP Getting Started Guide . 4.3. Create a Jakarta Messaging-based Message-Driven Bean in Red Hat CodeReady Studio This procedure shows how to add a Jakarta Messaging-based message-driven bean to a project in Red Hat CodeReady Studio. This procedure creates a Jakarta Enterprise Beans 3.x message-driven bean that uses annotations. Prerequisites You must have an existing project open in Red Hat CodeReady Studio. You must know the name and type of the Jakarta Messaging destination that the bean will be listening to. Support for Jakarta Messaging must be enabled in the JBoss EAP configuration to which this bean will be deployed. Add a Jakarta Messaging-based Message-driven Bean in Red Hat CodeReady Studio Open the Create EJB 3.x Message-Driven Bean wizard. Go to File New Other . Select EJB/Message-Driven Bean (EJB 3.x) and click the button. Figure 4.1. Create EJB 3.x Message-Driven Bean Wizard Specify class file destination details. There are three sets of details to specify for the bean class here: project, Java class, and message destination. Project: If multiple projects exist in the workspace, ensure that the correct one is selected in the Project menu. The folder where the source file for the new bean will be created is ejbModule under the selected project's directory. Only change this if you have a specific requirement. Java Class: The required fields are: Java package and Class name . It is not necessary to supply a superclass unless the business logic of your application requires it. Message Destination: These are the details you must supply for a Jakarta Messaging-based message-driven bean: Destination name , which is the queue or topic name that contains the messages that the bean will respond to. By default the JMS checkbox is selected. Do not change this. Set Destination type to Queue or Topic as required. Click the button. Enter message-driven bean specific information. The default values here are suitable for a Jakarta Messaging-based message-driven bean using container-managed transactions. Change the Transaction type to Bean if the Bean will use Bean-managed transactions. Change the Bean name if a different bean name than the class name is required. The JMS Message Listener interface will already be listed. You do not need to add or remove any interfaces unless they are specific to your application's business logic. Leave the checkboxes for creating method stubs selected. Click the Finish button. Result The message-driven bean is created with stub methods for the default constructor and the onMessage() method. A Red Hat CodeReady Studio editor window opens with the corresponding file. 4.4. Specifying a Resource Adapter in jboss-ejb3.xml for an MDB In the jboss-ejb3.xml deployment descriptor you can specify a resource adapter for an MDB to use. To specify a resource adapter in jboss-ejb3.xml for an MDB, use the following example. 
Example: jboss-ejb3.xml Configuration for an MDB Resource Adapter <jboss xmlns="http://www.jboss.com/xml/ns/javaee" xmlns:jee="http://java.sun.com/xml/ns/javaee" xmlns:mdb="urn:resource-adapter-binding"> <jee:assembly-descriptor> <mdb:resource-adapter-binding> <jee:ejb-name>MyMDB</jee:ejb-name> <mdb:resource-adapter-name>MyResourceAdapter.rar</mdb:resource-adapter-name> </mdb:resource-adapter-binding> </jee:assembly-descriptor> </jboss> For a resource adapter located in an EAR, you must use the following syntax for <mdb:resource-adapter-name> : For a resource adapter that is in another EAR: <mdb:resource-adapter-name>OtherDeployment.ear#MyResourceAdapter.rar</mdb:resource-adapter-name> For a resource adapter that is in the same EAR as the MDB, you can omit the EAR name: <mdb:resource-adapter-name>#MyResourceAdapter.rar</mdb:resource-adapter-name> 4.5. Using Resource Definition Annotations in MDBs Deployed to a Cluster If you use the @JMSConnectionFactoryDefinition and @JMSDestinationDefinition annotations to create a connection factory and destination for message-driven beans, be aware that the objects are only created on the server where the MDB is deployed. They are not created on all nodes in a cluster unless the MDB is also deployed to all nodes in the cluster. Because objects configured by these annotations are only created on the server where the MDB is deployed, this affects remote Jakarta Connectors topologies where an MDB reads messages from a remote server and then sends them to a remote server. 4.6. Enable Jakarta Enterprise Beans and MDB Property Substitution in an Application Red Hat JBoss Enterprise Application Platform allows you to enable property substitution in Jakarta Enterprise Beans and MDBs using the @ActivationConfigProperty and @Resource annotations. Property substitution requires the following configuration and code changes. You must enable property substitution in the JBoss EAP server configuration file. You must define the system properties in the server configuration file or pass them as arguments when you start the JBoss EAP server. You must modify the application code to use the substitution variables. The following examples demonstrate how to modify the helloworld-mdb quickstart that ships with JBoss EAP to use property substitution. See the helloworld-mdb-propertysubstitution quickstart for the completed working example. 4.6.1. Configure the Server to Enable Property Substitution To enable property substitution in the JBoss EAP server, you must set the annotation-property-replacement attribute in the ee subsystem of the server configuration to true . Back up the server configuration file. The helloworld-mdb-propertysubstitution quickstart example requires the full profile for a standalone server, so this is the EAP_HOME /standalone/configuration/standalone-full.xml file. If you are running your server in a managed domain, this is the EAP_HOME /domain/configuration/domain.xml file. Navigate to the JBoss EAP install directory and start the server with the full profile. Note For Windows Server, use the EAP_HOME \bin\standalone.bat script. Launch the management CLI. Note For Windows Server, use the EAP_HOME \bin\jboss-cli.bat script. Type the following command to enable annotation property substitution. You should see the following result. Review the changes to the JBoss EAP server configuration file. The ee subsystem should now contain the following XML. Example ee Subsystem Configuration <subsystem xmlns="urn:jboss:domain:ee:4.0"> ... 
<annotation-property-replacement>true</annotation-property-replacement> ... </subsystem> 4.6.2. Define the System Properties You can specify the system properties in the server configuration file or you can pass them as command line arguments when you start the JBoss EAP server. System properties defined in the server configuration file take precedence over those passed on the command line when you start the server. 4.6.2.1. Define the System Properties in the Server Configuration Launch the management CLI. Use the following command syntax to configure a system property in the JBoss EAP server. Syntax to Add a System Property The following system properties are configured for the helloworld-mdb-propertysubstitution quickstart. Example Commands to Add System Properties Review the changes to the JBoss EAP server configuration file. The following system properties should now appear in the server configuration file after the <extensions> element. Example System Properties Configuration <system-properties> <property name="property.helloworldmdb.queue" value="java:/queue/HELLOWORLDMDBPropQueue"/> <property name="property.helloworldmdb.topic" value="java:/topic/HELLOWORLDMDBPropTopic"/> <property name="property.connection.factory" value="java:/ConnectionFactory"/> </system-properties> 4.6.2.2. Pass the System Properties as Arguments on Server Start If you prefer, you can instead pass the arguments on the command line when you start the JBoss EAP server in the form of -D PROPERTY_NAME = PROPERTY_VALUE . The following is an example of how to pass the arguments for the system properties defined in the previous section. Example Server Start Command Passing System Properties 4.6.3. Modify the Application Code to Use the System Property Substitutions Replace the hard-coded @ActivationConfigProperty and @Resource annotation values with substitutions for the newly defined system properties. The following are examples of how to change the helloworld-mdb quickstart to use the newly defined system property substitutions. Change the @ActivationConfigProperty destination property value in the HelloWorldQueueMDB class to use the substitution for the system property. The @MessageDriven annotation should now look like this: HelloWorldQueueMDB Code Example @MessageDriven(name = "HelloWorldQueueMDB", activationConfig = { @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "USD{property.helloworldmdb.queue}"), @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") }) Change the @ActivationConfigProperty destination property value in the HelloWorldTopicMDB class to use the substitution for the system property. The @MessageDriven annotation should now look like this: HelloWorldTopicMDB Code Example @MessageDriven(name = "HelloWorldQTopicMDB", activationConfig = { @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "USD{property.helloworldmdb.topic}"), @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") }) Change the @Resource annotations in the HelloWorldMDBServletClient class to use the system property substitutions. The code should now look like this: HelloWorldMDBServletClient Code Example /** * Definition of the two Jakarta Messaging Service destinations used by the quickstart * (one queue and one topic).
*/ @JMSDestinationDefinitions( value = { @JMSDestinationDefinition( name = "java:/USD{property.helloworldmdb.queue}", interfaceName = "javax.jms.Queue", destinationName = "HelloWorldMDBQueue" ), @JMSDestinationDefinition( name = "java:/USD{property.helloworldmdb.topic}", interfaceName = "javax.jms.Topic", destinationName = "HelloWorldMDBTopic" ) }) /** * <p> * A simple servlet 3 as client that sends several messages to a queue or a topic. * </p> * * <p> * The servlet is registered and mapped to /HelloWorldMDBServletClient using the {@linkplain WebServlet * @HttpServlet}. * </p> * * @author Serge Pagop ([email protected]) * */ @WebServlet("/HelloWorldMDBServletClient") public class HelloWorldMDBServletClient extends HttpServlet { private static final long serialVersionUID = -8314035702649252239L; private static final int MSG_COUNT = 5; @Inject private JMSContext context; @Resource(lookup = "USD{property.helloworldmdb.queue}") private Queue queue; @Resource(lookup = "USD{property.helloworldmdb.topic}") private Topic topic; <!-- Remainder of code can be found in the `helloworld-mdb-propertysubstitution` quickstart. --> Modify the activemq-jms.xml file to use the system property substitution values. Example .activemq-jms.xml File <?xml version="1.0" encoding="UTF-8"?> <messaging-deployment xmlns="urn:jboss:messaging-activemq-deployment:1.0"> <server> <jms-destinations> <jms-queue name="HELLOWORLDMDBQueue"> <entry name="USD{property.helloworldmdb.queue}"/> </jms-queue> <jms-topic name="HELLOWORLDMDBTopic"> <entry name="USD{property.helloworldmdb.topic}"/> </jms-topic> </jms-destinations> </server> </messaging-deployment> Deploy the application. The application now uses the values specified by the system properties for the @Resource and @ActivationConfigProperty property values. 4.7. Activation Configuration Properties 4.7.1. Configuring MDBs Using Annotations You can configure activation properties by using the @MessageDriven element and sub-elements which correspond to the @ActivationConfigProperty annotation. @ActivationConfigProperty is an array of activation configuration properties for MDBs. The @ActivationConfigProperty annotation specification is as follows: @Target(value={}) @Retention(value=RUNTIME) public @interface ActivationConfigProperty { String propertyName(); String propertyValue(); } Example showing @ActivationConfigProperty @MessageDriven(name="MyMDBName", activationConfig = { @ActivationConfigProperty(propertyName="destinationLookup",propertyValue="queueA"), @ActivationConfigProperty(propertyName = "destinationType",propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"), }) 4.7.2. Configuring MDBs Using a Deployment Descriptor The <message-driven> element in the ejb-jar.xml defines the bean as an MDB. The <activation-config> and elements contain the MDB configuration via the activation-config-property elements. 
Example ejb-jar.xml <?xml version="1.1" encoding="UTF-8"?> <jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd" version="3.1"> <enterprise-beans> <message-driven> <ejb-name>MyMDBName</ejb-name> <ejb-class>org.jboss.tutorial.mdb_deployment_descriptor.bean.MyMDBName</ejb-class> <activation-config> <activation-config-property> <activation-config-property-name>destinationLookup</activation-config-property-name> <activation-config-property-value>queueA</activation-config-property-value> </activation-config-property> <activation-config-property> <activation-config-property-name>destinationType</activation-config-property-name> <activation-config-property-value>javax.jms.Queue</activation-config-property-value> </activation-config-property> <activation-config-property> <activation-config-property-name>acknowledgeMode</activation-config-property-name> <activation-config-property-value>Auto-acknowledge</activation-config-property-value> </activation-config-property> </activation-config> </message-driven> <enterprise-beans> </jboss:ejb-jar> Table 4.1. Activation Configuration Properties Defined by Jakarta Messaging Service Specifications Name Description destinationLookup The Java Naming and Directory Interface name of the queue or topic. This is a mandatory value. connectionFactoryLookup The lookup name of an administratively defined javax.jms.ConnectionFactory , javax.jms.QueueConnectionFactory or javax.jms.TopicConnectionFactory object that will be used to connect to the Jakarta Messaging provider from which the endpoint would receive messages. If not defined explicitly, pooled connection factory with name activemq-ra is used. destinationType The type of destination valid values are javax.jms.Queue or javax.jms.Topic . This is a mandatory value. messageSelector The value for a messageSelector property is a string which is used to select a subset of the available messages. Its syntax is based on a subset of the SQL 92 conditional expression syntax and is described in detail in Jakarta Messaging specification. Specifying a value for the messageSelector property on the ActivationSpec JavaBean is optional. acknowledgeMode The type of acknowledgement when not using transacted Jakarta Messaging. Valid values are Auto-acknowledge or Dups-ok-acknowledge . This is not a mandatory value. The default value is Auto-acknowledge . clientID The client ID of the connection. This is not a mandatory value. subscriptionDurability Whether topic subscriptions are durable. Valid values are Durable or NonDurable . This is not a mandatory value. The default value is NonDurable . subscriptionName The subscription name of the topic subscription. This is not a mandatory value. Table 4.2. Activation Configuration Properties Defined by JBoss EAP Name Description destination Using this property with useJNDI=true has the same meaning as destinationLookup . Using it with useJNDI=false , the destination is not looked up, but it is instantiated. You can use this property instead of destinationLookup . This is not a mandatory value. shareSubscriptions Whether the connection is configured to share subscriptions. The default value is False . user The user for the Jakarta Messaging connection. This is not a mandatory value. 
password The password for the Jakarta Messaging connection. This is not a mandatory value. maxSession The maximum number of concurrent sessions to use. This is not a mandatory value. The default value is 15 . transactionTimeout The transaction timeout for the session in milliseconds. This is not a mandatory value. If not specified or 0, the property is ignored and the transactionTimeout is not overridden and the default transactionTimeout defined in the Transaction Manager is used. useJNDI Whether or not use Java Naming and Directory Interface to look up the destination. The default value is True . jndiParams The Java Naming and Directory Interface parameters to use in the connection. Parameters are defined as name=value pairs separated by ; localTx Use local transaction instead of XA. The default value is False . setupAttempts Number of attempts to setup a Jakarta Messaging connection. It is possible that the MDB is deployed before the Jakarta Messaging resources are available. In that case, the resource adapter will try to set up several times until the resources are available. This applies only to inbound connections. The default value is -1 . setupInterval Interval in milliseconds between consecutive attempts to setup a Jakarta Messaging connection. This applies only to inbound connections. The default value is 2000 . rebalanceConnections Whether rebalancing of inbound connections is enabled or not. This parameter allows for rebalancing of all inbound connections when the underlying cluster topology changes. There is no rebalancing for outbound connections. The default value is False . deserializationWhiteList A comma-separated list of entries for the white list, which is the list of trusted classes and packages. This property is used by the Jakarta Messaging resource adapter to allow objects in the list to be deserialized. For more information, see Controlling Jakarta Messaging ObjectMessage Deserialization in Configuring Messaging for JBoss EAP. deserializationBlackList A comma-separated list of entries for the black list, which is the list of untrusted classes and packages. This property is used by the Jakarta Messaging resource adapter to prevent objects in the list from being deserialized. For more information, see Controlling Jakarta Messaging ObjectMessage Deserialization in Configuring Messaging for JBoss EAP. 4.7.3. Some Example Use Cases for Configuring MDBs Use case for an MDB receiving a message For a basic scenario when MDB receives a message, see the helloworld-mdb quickstart that is shipped with JBoss EAP. Use case for an MDB sending a message After processing the message you may need to inform other business systems or reply to the message. 
In this case, you can send the message from MDB as shown in the snippet below: package org.jboss.as.quickstarts.mdb; import javax.annotation.Resource; import javax.ejb.ActivationConfigProperty; import javax.ejb.MessageDriven; import javax.inject.Inject; import javax.jms.JMSContext; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageListener; import javax.jms.Queue; @MessageDriven(name = "MyMDB", activationConfig = { @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "queue/MyMDBRequest"), @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") }) public class MyMDB implements MessageListener { @Inject private JMSContext jmsContext; @Resource(lookup = "java:/queue/ResponseDefault") private Queue defaultDestination; /** * @see MessageListener#onMessage(Message) */ public void onMessage(Message rcvMessage) { try { Message response = jmsContext.createTextMessage("Response for message " + rcvMessage.getJMSMessageID()); if (rcvMessage.getJMSReplyTo() != null) { jmsContext.createProducer().send(rcvMessage.getJMSReplyTo(), response); } else { jmsContext.createProducer().send(defaultDestination, response); } } catch (JMSException e) { throw new RuntimeException(e); } } } In the example above, after the MDB receives the message, it replies to either the destination specified in JMSReplyTo or the destination which is bound to the Java Naming and Directory Interface name java:/queue/ResponseDefault . Use case for an MDB configuring rebalancing of inbound connection @MessageDriven(name="MyMDBName", activationConfig = { @ActivationConfigProperty(propertyName = "destinationType",propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "queueA"), @ActivationConfigProperty(propertyName = "rebalanceConnections", propertyValue = "true") } ) | [
"/subsystem=ejb3/strict-max-bean-instance-pool=mdb-strict-max-pool:write-attribute(name=derive-size,value=undefined) /subsystem=ejb3/strict-max-bean-instance-pool=mdb-strict-max-pool:write-attribute(name=max-pool-size,value=1) reload",
"<?xml version=\"1.1\" encoding=\"UTF-8\"?> <jboss:ejb-jar xmlns:jboss=\"http://www.jboss.com/xml/ns/javaee\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:d=\"urn:delivery-active:1.1\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd\" version=\"3.1\" impl-version=\"2.0\"> <assembly-descriptor> <d:delivery> <ejb-name>HelloWorldQueueMDB</ejb-name> <d:active>false</d:active> </d:delivery> </assembly-descriptor> </jboss:ejb-jar>",
"@MessageDriven(name = \"HelloWorldMDB\", activationConfig = { @ActivationConfigProperty(propertyName = \"destinationType\", propertyValue = \"javax.jms.Queue\"), @ActivationConfigProperty(propertyName = \"destination\", propertyValue = \"queue/HELLOWORLDMDBQueue\"), @ActivationConfigProperty(propertyName = \"acknowledgeMode\", propertyValue = \"Auto-acknowledge\") }) @DeliveryActive(false) public class HelloWorldMDB implements MessageListener { public void onMessage(Message rcvMessage) { // } }",
"<dependency> <groupId>org.jboss.ejb3</groupId> <artifactId>jboss-ejb3-ext-api</artifactId> <version>2.2.0.Final</version> </dependency>",
"cd deployment=helloworld-mdb.war/subsystem=ejb3/message-driven-bean=HelloWorldQueueMDB",
":stop-delivery",
":start-delivery",
"<delivery> <ejb-name>MdbName<ejb-name> <delivery-group>passive</delivery-group> </delivery>",
"<delivery-groups> <delivery-group name=\"group\" active=\"true\"/> </delivery-groups>",
"/subsystem=ejb3/mdb-delivery-group=group:add /subsystem=ejb3/mdb-delivery-group=group:remove /subsystem=ejb3/mdb-delivery-group=group:write-attribute(name=active,value=true)",
"@MessageDriven(name = \"HelloWorldQueueMDB\", activationConfig = { @ActivationConfigProperty(propertyName = \"destinationType\", propertyValue = \"javax.jms.Queue\"), @ActivationConfigProperty(propertyName = \"destination\", propertyValue = \"queue/HELLOWORLDMDBQueue\"), @ActivationConfigProperty(propertyName = \"acknowledgeMode\", propertyValue = \"Auto-acknowledge\") }) @DeliveryGroup(\"delivery-group-1\") @DeliveryGroup(\"delivery-group-2\") public class HelloWorldQueueMDB implements MessageListener { }",
"<?xml version=\"1.1\" encoding=\"UTF-8\"?> <jboss:ejb-jar xmlns:jboss=\"http://www.jboss.com/xml/ns/javaee\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:c=\"urn:clustering:1.1\" xmlns:d=\"urn:delivery-active:1.2\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd\" version=\"3.1\" impl-version=\"2.0\"> <assembly-descriptor> <c:clustering> <ejb-name>HelloWorldQueueMDB</ejb-name> <c:clustered-singleton>true</c:clustered-singleton> </c:clustering> <d:delivery> <ejb-name>*</ejb-name> <d:group>delivery-group-1</d:group> <d:group>delivery-group-2</d:group> </d:delivery> </assembly-descriptor> </jboss:ejb-jar>",
"<c:clustering> <ejb-name>*</ejb-name> <c:clustered-singleton>true</c:clustered-singleton> </c:clustering>",
"<d:delivery> <ejb-name>HelloWorldTopicMDB</ejb-name> <d:group>my-mdb-delivery-group</d:group> </d:delivery>",
"<jboss xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:jee=\"http://java.sun.com/xml/ns/javaee\" xmlns:mdb=\"urn:resource-adapter-binding\"> <jee:assembly-descriptor> <mdb:resource-adapter-binding> <jee:ejb-name>MyMDB</jee:ejb-name> <mdb:resource-adapter-name>MyResourceAdapter.rar</mdb:resource-adapter-name> </mdb:resource-adapter-binding> </jee:assembly-descriptor> </jboss>",
"<mdb:resource-adapter-name>OtherDeployment.ear#MyResourceAdapter.rar</mdb:resource-adapter-name>",
"<mdb:resource-adapter-name>#MyResourceAdapter.rar</mdb:resource-adapter-name>",
"EAP_HOME /bin/standalone.sh -c standalone-full.xml",
"EAP_HOME /bin/jboss-cli.sh --connect",
"/subsystem=ee:write-attribute(name=annotation-property-replacement,value=true)",
"{\"outcome\" => \"success\"}",
"<subsystem xmlns=\"urn:jboss:domain:ee:4.0\"> <annotation-property-replacement>true</annotation-property-replacement> </subsystem>",
"/system-property= PROPERTY_NAME :add(value= PROPERTY_VALUE )",
"/system-property=property.helloworldmdb.queue:add(value=java:/queue/HELLOWORLDMDBPropQueue) /system-property=property.helloworldmdb.topic:add(value=java:/topic/HELLOWORLDMDBPropTopic) /system-property=property.connection.factory:add(value=java:/ConnectionFactory)",
"<system-properties> <property name=\"property.helloworldmdb.queue\" value=\"java:/queue/HELLOWORLDMDBPropQueue\"/> <property name=\"property.helloworldmdb.topic\" value=\"java:/topic/HELLOWORLDMDBPropTopic\"/> <property name=\"property.connection.factory\" value=\"java:/ConnectionFactory\"/> </system-properties>",
"EAP_HOME /bin/standalone.sh -c standalone-full.xml -Dproperty.helloworldmdb.queue=java:/queue/HELLOWORLDMDBPropQueue -Dproperty.helloworldmdb.topic=java:/topic/HELLOWORLDMDBPropTopic -Dproperty.connection.factory=java:/ConnectionFactory",
"@MessageDriven(name = \"HelloWorldQueueMDB\", activationConfig = { @ActivationConfigProperty(propertyName = \"destinationLookup\", propertyValue = \"USD{property.helloworldmdb.queue}\"), @ActivationConfigProperty(propertyName = \"destinationType\", propertyValue = \"javax.jms.Queue\"), @ActivationConfigProperty(propertyName = \"acknowledgeMode\", propertyValue = \"Auto-acknowledge\") })",
"@MessageDriven(name = \"HelloWorldQTopicMDB\", activationConfig = { @ActivationConfigProperty(propertyName = \"destinationLookup\", propertyValue = \"USD{property.helloworldmdb.topic}\"), @ActivationConfigProperty(propertyName = \"destinationType\", propertyValue = \"javax.jms.Topic\"), @ActivationConfigProperty(propertyName = \"acknowledgeMode\", propertyValue = \"Auto-acknowledge\") })",
"/** * Definition of the two Jakarta Messaging Service destinations used by the quickstart * (one queue and one topic). */ @JMSDestinationDefinitions( value = { @JMSDestinationDefinition( name = \"java:/USD{property.helloworldmdb.queue}\", interfaceName = \"javax.jms.Queue\", destinationName = \"HelloWorldMDBQueue\" ), @JMSDestinationDefinition( name = \"java:/USD{property.helloworldmdb.topic}\", interfaceName = \"javax.jms.Topic\", destinationName = \"HelloWorldMDBTopic\" ) }) /** * <p> * A simple servlet 3 as client that sends several messages to a queue or a topic. * </p> * * <p> * The servlet is registered and mapped to /HelloWorldMDBServletClient using the {@linkplain WebServlet * @HttpServlet}. * </p> * * @author Serge Pagop ([email protected]) * */ @WebServlet(\"/HelloWorldMDBServletClient\") public class HelloWorldMDBServletClient extends HttpServlet { private static final long serialVersionUID = -8314035702649252239L; private static final int MSG_COUNT = 5; @Inject private JMSContext context; @Resource(lookup = \"USD{property.helloworldmdb.queue}\") private Queue queue; @Resource(lookup = \"USD{property.helloworldmdb.topic}\") private Topic topic; <!-- Remainder of code can be found in the `helloworld-mdb-propertysubstitution` quickstart. -->",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <messaging-deployment xmlns=\"urn:jboss:messaging-activemq-deployment:1.0\"> <server> <jms-destinations> <jms-queue name=\"HELLOWORLDMDBQueue\"> <entry name=\"USD{property.helloworldmdb.queue}\"/> </jms-queue> <jms-topic name=\"HELLOWORLDMDBTopic\"> <entry name=\"USD{property.helloworldmdb.topic}\"/> </jms-topic> </jms-destinations> </server> </messaging-deployment>",
"@Target(value={}) @Retention(value=RUNTIME) public @interface ActivationConfigProperty { String propertyName(); String propertyValue(); }",
"@MessageDriven(name=\"MyMDBName\", activationConfig = { @ActivationConfigProperty(propertyName=\"destinationLookup\",propertyValue=\"queueA\"), @ActivationConfigProperty(propertyName = \"destinationType\",propertyValue = \"javax.jms.Queue\"), @ActivationConfigProperty(propertyName = \"acknowledgeMode\", propertyValue = \"Auto-acknowledge\"), })",
"<?xml version=\"1.1\" encoding=\"UTF-8\"?> <jboss:ejb-jar xmlns:jboss=\"http://www.jboss.com/xml/ns/javaee\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd\" version=\"3.1\"> <enterprise-beans> <message-driven> <ejb-name>MyMDBName</ejb-name> <ejb-class>org.jboss.tutorial.mdb_deployment_descriptor.bean.MyMDBName</ejb-class> <activation-config> <activation-config-property> <activation-config-property-name>destinationLookup</activation-config-property-name> <activation-config-property-value>queueA</activation-config-property-value> </activation-config-property> <activation-config-property> <activation-config-property-name>destinationType</activation-config-property-name> <activation-config-property-value>javax.jms.Queue</activation-config-property-value> </activation-config-property> <activation-config-property> <activation-config-property-name>acknowledgeMode</activation-config-property-name> <activation-config-property-value>Auto-acknowledge</activation-config-property-value> </activation-config-property> </activation-config> </message-driven> <enterprise-beans> </jboss:ejb-jar>",
"package org.jboss.as.quickstarts.mdb; import javax.annotation.Resource; import javax.ejb.ActivationConfigProperty; import javax.ejb.MessageDriven; import javax.inject.Inject; import javax.jms.JMSContext; import javax.jms.JMSException; import javax.jms.Message; import javax.jms.MessageListener; import javax.jms.Queue; @MessageDriven(name = \"MyMDB\", activationConfig = { @ActivationConfigProperty(propertyName = \"destinationLookup\", propertyValue = \"queue/MyMDBRequest\"), @ActivationConfigProperty(propertyName = \"destinationType\", propertyValue = \"javax.jms.Queue\"), @ActivationConfigProperty(propertyName = \"acknowledgeMode\", propertyValue = \"Auto-acknowledge\") }) public class MyMDB implements MessageListener { @Inject private JMSContext jmsContext; @Resource(lookup = \"java:/queue/ResponseDefault\") private Queue defaultDestination; /** * @see MessageListener#onMessage(Message) */ public void onMessage(Message rcvMessage) { try { Message response = jmsContext.createTextMessage(\"Response for message \" + rcvMessage.getJMSMessageID()); if (rcvMessage.getJMSReplyTo() != null) { jmsContext.createProducer().send(rcvMessage.getJMSReplyTo(), response); } else { jmsContext.createProducer().send(defaultDestination, response); } } catch (JMSException e) { throw new RuntimeException(e); } } }",
"@MessageDriven(name=\"MyMDBName\", activationConfig = { @ActivationConfigProperty(propertyName = \"destinationType\",propertyValue = \"javax.jms.Queue\"), @ActivationConfigProperty(propertyName = \"destinationLookup\", propertyValue = \"queueA\"), @ActivationConfigProperty(propertyName = \"rebalanceConnections\", propertyValue = \"true\") } )"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_jakarta_enterprise_beans_applications/message_driven_beans-1 |
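Rounding out the activation configuration reference for this chapter, the following hedged sketch shows a durable topic subscriber that combines several of the properties from Tables 4.1 and 4.2. The bean name, topic, message selector, subscription details, and maxSession value are illustrative assumptions and are not taken from the quickstarts.

package org.jboss.as.quickstarts.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical durable topic subscriber; all destination, selector, and sizing
// values below are examples only.
@MessageDriven(name = "PriceUpdateMDB", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "topic/PriceUpdates"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
    // Deliver only messages that match this SQL-92 style selector expression.
    @ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "region = 'EMEA'"),
    // Durable subscription: messages published while the MDB is down are retained.
    @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
    @ActivationConfigProperty(propertyName = "subscriptionName", propertyValue = "priceUpdateSubscription"),
    @ActivationConfigProperty(propertyName = "clientID", propertyValue = "price-update-client"),
    // JBoss EAP-specific property: cap the number of concurrent sessions.
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "5") })
public class PriceUpdateMDB implements MessageListener {

    public void onMessage(Message message) {
        // Process the price update here.
    }
}

Because subscriptionDurability is set to Durable, a stable clientID and subscriptionName are supplied so the subscription survives restarts; maxSession only caps concurrency and can be tuned independently.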
11.3. Configuring a Kerberos Client | 11.3. Configuring a Kerberos Client All that is required to set up a Kerberos 5 client is to install the client packages and provide each client with a valid krb5.conf configuration file. While ssh and slogin are the preferred methods of remotely logging in to client systems, Kerberos-aware versions of rsh and rlogin are still available, with additional configuration changes. Install the krb5-libs and krb5-workstation packages on all of the client machines. Supply a valid /etc/krb5.conf file for each client. Usually this can be the same krb5.conf file used by the Kerberos Distribution Center (KDC). For example: In some environments, the KDC is only accessible using an HTTPS Kerberos Key Distribution Center Proxy (KKDCP). In this case, make the following changes: Assign the URL of the KKDCP instead of the host name to the kdc and admin_server options in the [realms] section: For redundancy, the parameters kdc , admin_server , and kpasswd_server can be added multiple times using different KKDCP servers. On IdM clients, restart the sssd service to make the changes take effect: To use Kerberos-aware rsh and rlogin services, install the rsh package. Before a workstation can use Kerberos to authenticate users who connect using ssh , rsh , or rlogin , it must have its own host principal in the Kerberos database. The sshd , kshd , and klogind server programs all need access to the keys for the host service's principal. Using kadmin , add a host principal for the workstation on the KDC. The instance in this case is the host name of the workstation. Use the -randkey option for the kadmin 's addprinc command to create the principal and assign it a random key: The keys can be extracted for the workstation by running kadmin on the workstation itself and using the ktadd command. To use other Kerberos-aware network services, install the krb5-server package and start the services. The Kerberos-aware services are listed in Table 11.3, "Common Kerberos-aware Services" . Table 11.3. Common Kerberos-aware Services Service Name Usage Information ssh OpenSSH uses GSS-API to authenticate users to servers if the client's and server's configuration both have GSSAPIAuthentication enabled. If the client also has GSSAPIDelegateCredentials enabled, the user's credentials are made available on the remote system. OpenSSH also contains the sftp tool, which provides an FTP-like interface to SFTP servers and can use GSS-API. IMAP The cyrus-imap package uses Kerberos 5 if it also has the cyrus-sasl-gssapi package installed. The cyrus-sasl-gssapi package contains the Cyrus SASL plugins which support GSS-API authentication. Cyrus IMAP functions properly with Kerberos as long as the cyrus user is able to find the proper key in /etc/krb5.keytab , and the root for the principal is set to imap (created with kadmin ). An alternative to cyrus-imap can be found in the dovecot package, which is also included in Red Hat Enterprise Linux. This package contains an IMAP server but does not, to date, support GSS-API and Kerberos. | [
"yum install krb5-workstation krb5-libs",
"[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = EXAMPLE.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true allow_weak_crypto = true [realms] EXAMPLE.COM = { kdc = kdc.example.com.:88 admin_server = kdc.example.com default_domain = example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM",
"[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }",
"systemctl restart sssd",
"addprinc -randkey host/server.example.com",
"ktadd -k /etc/krb5.keytab host/server.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/configuring_a_kerberos_5_client |
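To illustrate the redundancy note above, each KKDCP-related parameter can simply be repeated once per proxy in the [realms] section; the second proxy host name (kdc2.example.com) is an assumption used only for illustration.

[realms]
EXAMPLE.COM = {
  kdc = https://kdc.example.com/KdcProxy
  kdc = https://kdc2.example.com/KdcProxy
  admin_server = https://kdc.example.com/KdcProxy
  admin_server = https://kdc2.example.com/KdcProxy
  kpasswd_server = https://kdc.example.com/KdcProxy
  kpasswd_server = https://kdc2.example.com/KdcProxy
  default_domain = example.com
}

Clients work through the listed servers in order, so put the preferred proxy first and keep the second entry as a fallback.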
4.16. Appendix: Configuration files for Red Hat Gluster Storage Deployment | 4.16. Appendix: Configuration files for Red Hat Gluster Storage Deployment Filename: glusterfs-config.yaml Filename: gluster_instance.jinja Filename: path_utils.jinja | [
"Copyright 2015 Google Inc. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # glusterfs-config.yaml # The Gluster FS deployment consists of a primary pool and a secondary pool of resources, each on a separate zone. # imports: - path: gluster_instance.jinja - path: path_utils.jinja resources: - name: gluster_instance type: gluster_instance.jinja properties: namePrefix: rhgs numPrimaryReplicas: 10 primaryZone: us-central1-a secondaryZone: us-central1-b numSecondaryReplicas: 10 backupZone: europe-west1-b sourceImage: global/images/rhgs-image01 dataSourceImage: global/images/rhgs-data-image01 machineType: n1-highmem-4 network: default bootDiskType: pd-standard dataDiskType: pd-standard dataDiskSizeGb: 10230",
"Copyright 2015 Google Inc. All Rights Reserved. # Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. GlusterFs configuration variables # Required Cloud resource input parameters: * numPrimaryReplicas - number of instances to create in the primary zone * numSecondaryReplicas - number of instances to create in the secondary zone * namePrefix - hostname prefix The instance number (0 based) will be appended (\"-n<#><#>\") * primaryZone - Compute Engine zone for the instance (short name) * secondaryZone - Compute Engine zone for the instance (short name) * network - Compute Engine network for the instance (full URI) * image - Compute Engine image for the instance (full URI) * machineType - Compute Engine machine type for the instance (full URI) * bootDiskType - Compute Engine boot disk type for the instance (full URI) * dataDiskType: Compute Engine data disk type for the instance (full URI) * dataDiskSizeGb: Data disk size in Gigabytes {% import 'path_utils.jinja' as path_utils with context %} Grab the config properties {% set numPrimaryReplicas = properties[\"numPrimaryReplicas\"] + 1%} {% set numSecondaryReplicas = properties[\"numSecondaryReplicas\"] + 1 %} {% set image = properties[\"image\"] %} Macros and variables dealing with naming {% set prefix = properties[\"namePrefix\"] %} {% macro hostname(prefix, id) -%} {{ \"%s-n%02d\"|format(prefix, id) }} {%- endmacro %} {% macro diskname(prefix, id) -%} {{ \"%s-data-disk-n%02d\"|format(prefix, id) }} {%- endmacro %} Expand resource input parameters into full URLs {% set network = path_utils.networkPath(properties[\"network\"]) %} {% set primaryZone = properties[\"primaryZone\"] %} {% set bootDiskType = path_utils.diskTypePath( primaryZone, properties[\"bootDiskType\"]) %} {% set dataDiskType = path_utils.diskTypePath( primaryZone, properties[\"dataDiskType\"]) %} {% set machineType = path_utils.machineTypePath( primaryZone, properties[\"machineType\"]) %} resources: Add clone instances in the local Zone {% for n_suffix in range(1, numPrimaryReplicas) %} {% set namePrefix = prefix + '-primary' %} - type: compute.v1.disk name: {{ diskname(namePrefix, n_suffix) }} properties: zone: {{ primaryZone }} type: {{ dataDiskType }} sizeGb: {{ properties[\"dataDiskSizeGb\"] }} sourceImage: {{ properties[\"dataSourceImage\"] }} - type: compute.v1.instance name: {{ hostname(namePrefix, n_suffix) }} properties: zone: {{ primaryZone }} machineType: {{ machineType }} disks: # Request boot disk creation (mark for autodelete) - deviceName: boot type: PERSISTENT boot: true autoDelete: true initializeParams: sourceImage: {{ properties[\"sourceImage\"] }} diskType: {{ bootDiskType }} diskSizeGb: 10 # Attach the existing data disk (mark for autodelete) - deviceName: {{ diskname(namePrefix, n_suffix) }} source: USD(ref.{{ diskname(namePrefix, n_suffix) }}.selfLink) autoDelete: true type: PERSISTENT networkInterfaces: - network: {{ network }} accessConfigs: - name: External NAT type: ONE_TO_ONE_NAT tags: items: - \"glusterfs-deployed-from-google-developer-console\" {% endfor %} Setup in-region 
replicas {% set network = path_utils.networkPath(properties[\"network\"]) %} {% set secondaryZone = properties[\"secondaryZone\"] %} {% set bootDiskType = path_utils.diskTypePath( secondaryZone, properties[\"bootDiskType\"]) %} {% set dataDiskType = path_utils.diskTypePath( secondaryZone, properties[\"dataDiskType\"]) %} {% set machineType = path_utils.machineTypePath( secondaryZone, properties[\"machineType\"]) %} {% for n_suffix in range(1, numPrimaryReplicas) %} {% set namePrefix = prefix + '-secondary' %} - type: compute.v1.disk name: {{ diskname(namePrefix, n_suffix) }} properties: zone: {{ secondaryZone }} type: {{ dataDiskType }} sizeGb: {{ properties[\"dataDiskSizeGb\"] }} sourceImage: {{ properties[\"dataSourceImage\"] }} - type: compute.v1.instance name: {{ hostname(namePrefix, n_suffix) }} properties: zone: {{ secondaryZone }} machineType: {{ machineType }} disks: # Request boot disk creation (mark for autodelete) - deviceName: boot type: PERSISTENT boot: true autoDelete: true initializeParams: sourceImage: {{ properties[\"sourceImage\"] }} diskType: {{ bootDiskType }} diskSizeGb: 10 # Attach the existing data disk (mark for autodelete) - deviceName: {{ diskname(namePrefix, n_suffix) }} source: USD(ref.{{ diskname(namePrefix, n_suffix) }}.selfLink) autoDelete: true type: PERSISTENT networkInterfaces: - network: {{ network }} accessConfigs: - name: External NAT type: ONE_TO_ONE_NAT tags: items: - \"glusterfs-deployed-from-google-developer-console\" {% endfor %} Add clone instances in the remote Zone {% set backupZone = properties[\"backupZone\"] %} {% set bootDiskType = path_utils.diskTypePath( backupZone, properties[\"bootDiskType\"]) %} {% set dataDiskType = path_utils.diskTypePath( backupZone, properties[\"dataDiskType\"]) %} {% set machineType = path_utils.machineTypePath( backupZone, properties[\"machineType\"]) %} {% for n_suffix in range(1, numSecondaryReplicas) %} {% set namePrefix = prefix + '-backup' %} - type: compute.v1.disk name: {{ diskname(namePrefix, n_suffix) }} properties: zone: {{ backupZone }} type: {{ dataDiskType }} sizeGb: {{ properties[\"dataDiskSizeGb\"] }} sourceImage: {{ properties[\"dataSourceImage\"] }} - type: compute.v1.instance name: {{ hostname(namePrefix, n_suffix) }} properties: zone: {{ backupZone }} machineType: {{ machineType }} disks: # Request boot disk creation (mark for autodelete) - deviceName: boot type: PERSISTENT boot: true autoDelete: true initializeParams: sourceImage: {{ properties[\"sourceImage\"] }} diskType: {{ bootDiskType }} diskSizeGb: 10 # Attach the existing data disk (mark for autodelete) - deviceName: {{ diskname(namePrefix, n_suffix) }} source: USD(ref.{{ diskname(namePrefix, n_suffix) }}.selfLink) autoDelete: true type: PERSISTENT networkInterfaces: - network: {{ network }} accessConfigs: - name: External NAT type: ONE_TO_ONE_NAT tags: items: - \"glusterfs-deployed-from-google-developer-console\" {% endfor %}",
"Copyright 2015 Google Inc. All rights reserved. # Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. path_utils.jinja # Jinja macros for expanding short resource names into full paths Must have reference to the global env object, so when including this file, use the jinja import \"with context\" option. {% macro projectPrefix() -%} {{ \"https://www.googleapis.com/compute/v1/projects/%s\"|format(env[\"project\"]) }} {%- endmacro %} {% macro imagePath(image) -%} {% if image.startswith(\"https://\") -%} {{ image }} {% elif image.startswith(\"debian-\") -%} {{ \"https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/\" + image }} {% elif image.startswith(\"windows-\") -%} {{ \"https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/\" + image }} {% endif -%} {%- endmacro %} {% macro machineTypePath(zone, machineType) -%} {% if machineType.startswith(\"https://\") -%} {{ machineType }} {% else -%} {{ \"%s/zones/%s/machineTypes/%s\"|format(projectPrefix(), zone, machineType) }} {% endif -%} {%- endmacro %} {% macro networkPath(network) -%} {% if network.startswith(\"https://\") -%} {{ network }} {% else -%} {{ \"%s/global/networks/%s\"|format(projectPrefix(), network) }} {% endif -%} {%- endmacro %} {% macro diskTypePath(zone, diskType) -%} {% if diskType.startswith(\"https://\") -%} {{ diskType }} {% else -%} {{ \"%s/zones/%s/diskTypes/%s\"|format(projectPrefix(), zone, diskType) }} {% endif -%} {%- endmacro %}"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/sect-Documentation-Deployment_Guide_for_Public_Cloud-Google_Cloud_Platform-RHGS_Configuration_Files |
Chapter 5. Known issues | Chapter 5. Known issues There are no known issues for this release. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/known_issues |
3.2. Making Installation USB Media | 3.2. Making Installation USB Media You can use a USB drive or an SD card instead of a CD or DVD to create bootable media for installing Red Hat Enterprise Linux on 64-bit AMD, Intel, or ARM systems. The exact procedure varies depending on whether you want to perform it on a Linux or Windows system. You can create minimal boot media and full installation media using the same procedure; the only limitation is the capacity of the USB drive - it must have enough space to fit the entire image, which means roughly 450 MB for minimal boot media and 4.8 GB for full installation media. 3.2.1. Making Installation USB Media on Linux The following procedure assumes you are using a Linux system and that you have downloaded an appropriate ISO image as described in Chapter 2, Downloading Red Hat Enterprise Linux . On most Linux distributions, it will work without the need for installing any additional packages. Warning This procedure is destructive. Any data on the USB flash drive will be destroyed with no warning. Make sure that you specify the correct drive, and make sure that this drive does not contain any data you want to preserve. Many Linux distributions provide their own tools for creating live USB media: liveusb-creator on Fedora, usb-creator on Ubuntu, and others. Describing these tools is beyond the scope of this book; the following procedure will work on most Linux systems. Procedure 3.1. Making USB Media on Linux Connect a USB flash drive to the system and execute the dmesg command. A log detailing all recent events will be displayed. At the bottom of this log, you will see a set of messages caused by the USB flash drive you just connected. It will look like a set of lines similar to the following: Note the name of the connected device - in the above example, it is sdb . Log in as root : Provide your root password when prompted. Make sure that the device is not mounted. First, use the findmnt device command and the device name you found in the earlier steps. For example, if the device name is sdb , use the following command: If the command displays no output, you can proceed with the step. However, if the command does provide output, it means that the device was automatically mounted and you must unmount it before proceeding. A sample output will look similar to the following: Note the TARGET column. , use the umount target command to unmount the device: Use the dd command to write the installation ISO image directly to the USB device: Replace /image_directory/image.iso with the full path to the ISO image file you downloaded, device with the device name as reported by the dmesg command earlier, and blocksize with a reasonable block size (for example, 512k ) to speed up the writing process. The bs parameter is optional, but it can speed up the process considerably. Important Make sure to specify the output as the device name (for example, /dev/sda ), not as a name of a partition on the device (for example, /dev/sda1 ). For example, if the ISO image is located in /home/testuser/Downloads/rhel-server-7-x86_64-boot.iso and the detected device name is sdb , the command will look like the following: Wait for dd to finish writing the image to the device. Note that no progress bar is displayed; the data transfer is finished when the # prompt appears again. After the prompt is displayed, log out from the root account and unplug the USB drive. The USB drive is now ready to be used as a boot device. 
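As a convenience, the individual steps above can be strung together as the following sequence; the device name (sdb), the mount point (/mnt/iso), and the image path are the ones used in the examples above and must be adjusted to match your own system before running anything.

dmesg               # check the last lines of the output for the newly attached drive (sdb here)
su -                # the remaining commands are run as root
findmnt /dev/sdb    # no output means the device is not mounted
umount /mnt/iso     # only needed if findmnt reported a mount target
dd if=/home/testuser/Downloads/rhel-server-7-x86_64-boot.iso of=/dev/sdb bs=512k   # write to the whole device, never a partition such as /dev/sdb1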
You can continue with Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems on AMD, Intel, and ARM systems, or Chapter 12, Booting the Installation on IBM Power Systems on IBM Power Systems servers. Note Non-virtualized installations (known as "bare metal" installations) on IBM Power Systems servers require that the inst.stage2= boot option is specified. Refer to Section 23.1, "Configuring the Installation System at the Boot Menu" for information about the inst.stage2= boot option. 3.2.2. Making Installation USB Media on Windows The procedure of creating bootable USB media on Windows depends on which tool you use. There are many different utilities which allow you to write an ISO image to a USB drive. Red Hat recommends using the Fedora Media Writer , available for download at https://github.com/FedoraQt/MediaWriter/releases . Note Fedora Media Writer is a community product and is not supported by Red Hat. Any issues with the tool can be reported at https://github.com/FedoraQt/MediaWriter/issues . Important Transferring the ISO image file to the USB drive using Windows Explorer or a similar file manager will not work - you will not be able to boot from the device. Procedure 3.2. Making USB Media on Windows Download and install Fedora Media Writer . Download the Red Hat Enterprise Linux ISO image you want to use to create the media. (See Chapter 2, Downloading Red Hat Enterprise Linux for instructions on obtaining ISO images.) Plug in the USB drive you will be using to create bootable media. Open Fedora Media Writer . In the main window, click Custom Image and select the downloaded Red Hat Enterprise Linux ISO image. From the drop-down menu, select the drive you want to use. If the drive does not appear, verify that the USB drive is connected and restart Fedora Media Writer . Click Write to disk . The boot media creation process will begin. Do not unplug the drive until the operation completes. Depending on the size of the ISO image and the write speed of the USB drive, writing the image can take several minutes. Figure 3.1. Fedora Media Writer When the creation process finishes and the Complete! message appears, unmount the USB drive using the Safely remove hardware icon in the system's notification area. The USB drive is now ready to be used as a boot device. You can continue with Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems on AMD, Intel, and ARM systems, or Chapter 12, Booting the Installation on IBM Power Systems on IBM Power Systems servers. 3.2.3. Making Installation USB Media on Mac OS X This procedure involves using the dd command line tool to write the installation image to a USB flash drive. Note that some steps involve use of the sudo command, which is only available when logged in with an administrator account that requires a password. Warning All data on the USB flash drive will be deleted by this procedure. Procedure 3.3. Making USB Media on Mac OS X Connect a USB flash drive to the system and identify the device path with the diskutil list command. The device path has the format of /dev/disk number , where number is the number of the disk. The disks are numbered starting at zero (0). Disk 0 is likely to be the OS X recovery disk, and Disk 1 is likely to be your main OS X installation. In the following example, it is disk2 : To identify your USB flash drive, compare the NAME , TYPE and SIZE columns to what you know about your flash drive. 
For example, the NAME should be the same as the title of the flash drive icon in the Finder . You can also compare these values to those in the flash drive's information panel; right-click on the drive icon and select Get Info . Use the diskutil unmountDisk command to unmount the flash drive's filesystem volumes: When you do this, the icon for the flash drive disappears from your desktop. If it does not, you might have identified the wrong disk. If you attempt to unmount the system disk accidentally, you get a failed to unmount error. Use the dd command as a parameter of the sudo command to write the ISO image to the flash drive: Note Mac OS X provides both a block ( /dev/disk* ) and character device ( /dev/rdisk* ) file for each storage device. Writing an image to the /dev/rdisk number character device is faster than to the /dev/disk number block device. Example 3.1. Writing an ISO Image to a Disk To write the /Users/ user_name /Downloads/rhel-server-7-x86_64-boot.iso file to the /dev/rdisk2 device: Wait for the command to finish. Note that no progress bar is displayed; however, to check the status of the operation while it is still running, press Ctrl + t in the terminal: The speed of the data transfer depends on the speed of your USB ports and the flash drive. After the prompt is displayed again, the data transfer is finished. You can then unplug the flash drive. The flash drive is now ready to be used as a boot device. You can continue with Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems on AMD64 and Intel 64 systems or Chapter 12, Booting the Installation on IBM Power Systems on IBM Power Systems servers. | [
"[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk",
"su -",
"findmnt /dev/sdb",
"findmnt /dev/sdb TARGET SOURCE FSTYPE OPTIONS /mnt/iso /dev/sdb iso9660 ro,relatime",
"umount /mnt/iso",
"dd if= /image_directory/image.iso of=/dev/ device bs= blocksize",
"dd if=/home/testuser/Downloads/rhel-server-7-x86_64-boot.iso of=/dev/sdb bs=512k",
"diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.3 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_CoreStorage 400.0 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 4: Apple_CoreStorage 98.8 GB disk0s4 5: Apple_Boot Recovery HD 650.0 MB disk0s5 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS YosemiteHD *399.6 GB disk1 Logical Volume on disk0s1 8A142795-8036-48DF-9FC5-84506DFBB7B2 Unlocked Encrypted /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *8.0 GB disk2 1: Windows_NTFS SanDisk USB 8.0 GB disk2s1",
"diskutil unmountDisk /dev/disk number Unmount of all volumes on disk number was successful",
"sudo dd if= /path/to/image.iso of=/dev/rdisk number bs=1m>",
"sudo dd if=/Users/ user_name /Downloads/rhel-server-7-x86_64-boot.iso of=/dev/rdisk2",
"load: 1.02 cmd: dd 3668 uninterruptible 0.00u 1.91s 112+0 records in 111+0 records out 116391936 bytes transferred in 114.834860 secs (1013559 bytes/sec)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-making-usb-media |
Virtualization | Virtualization OpenShift Container Platform 4.14 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/index |
11.3.2. Removing an LVM2 Logical Volume for Swap | 11.3.2. Removing an LVM2 Logical Volume for Swap The swap logical volume cannot be in use (no system locks or processes on the volume). The easiest way to achieve this is to boot your system in rescue mode. Refer to Chapter 5, Basic System Recovery for instructions on booting into rescue mode. When prompted to mount the file system, select Skip . To remove a swap logical volume (assuming /dev/VolGroup00/LogVol02 is the swap volume you want to remove): Disable swapping for the associated logical volume: Remove the LVM2 logical volume of size 512 MB: Remove the following entry from the /etc/fstab file: Test that the logical volume has been removed properly: | [
"swapoff -v /dev/VolGroup00/LogVol02",
"lvm lvremove /dev/VolGroup00/LogVol02",
"/dev/VolGroup00/LogVol02 swap swap defaults 0 0",
"cat /proc/swaps # free"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Removing_Swap_Space-Removing_an_LVM2_Logical_Volume_for_Swap |
Chapter 4. Accessing the registry | Chapter 4. Accessing the registry Use the following sections for instructions on accessing the registry, including viewing logs and metrics, as well as securing and exposing the registry. You can access the registry directly to invoke podman commands. This allows you to push images to or pull them from the integrated registry directly using operations like podman push or podman pull . To do so, you must be logged in to the registry using the podman login command. The operations you can perform depend on your user permissions, as described in the following sections. 4.1. Prerequisites You have access to the cluster as a user with the cluster-admin role. You must have configured an identity provider (IDP). For pulling images, for example when using the podman pull command, the user must have the registry-viewer role. To add this role, run the following command: USD oc policy add-role-to-user registry-viewer <user_name> For writing or pushing images, for example when using the podman push command: The user must have the registry-editor role. To add this role, run the following command: USD oc policy add-role-to-user registry-editor <user_name> Your cluster must have an existing project where the images can be pushed to. 4.2. Accessing the registry directly from the cluster You can access the registry from inside the cluster. Procedure Access the registry from the cluster by using internal routes: Access the node by getting the node's name: USD oc get nodes USD oc debug nodes/<node_name> To enable access to tools such as oc and podman on the node, change your root directory to /host : sh-4.2# chroot /host Log in to the container image registry by using your access token: sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443 sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000 You should see a message confirming login, such as: Login Succeeded! Note You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure. Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name> . Perform podman pull and podman push operations against your registry: Important You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project. In the following examples, use: Component Value <registry_ip> 172.30.124.220 <port> 5000 <project> openshift <image> image <tag> omitted (defaults to latest ) Pull an arbitrary image: sh-4.2# podman pull <name.io>/<image> Tag the new image with the form <registry_ip>:<port>/<project>/<image> . The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry: sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image> Note You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the step will fail. To test, you can create a new project to push the image. 
Push the newly tagged image to your registry: sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image> Note When pushing images to the internal registry, the repository name must use the <project>/<name> format. Using multiple project levels in the repository name results in an authentication error. 4.3. Checking the status of the registry pods As a cluster administrator, you can list the image registry pods running in the openshift-image-registry project and check their status. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure List the pods in the openshift-image-registry project and view their status: USD oc get pods -n openshift-image-registry Example output NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m 4.4. Viewing registry logs You can view the logs for the registry by using the oc logs command. Procedure Use the oc logs command with deployments to view the logs for the container image registry: USD oc logs deployments/image-registry -n openshift-image-registry Example output 2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown" 2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler" 2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 4.5. Accessing registry metrics The OpenShift Container Registry provides an endpoint for Prometheus metrics . Prometheus is a stand-alone, open source systems monitoring and alerting toolkit. The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. Procedure You can access the metrics by running a metrics query using a cluster role. Cluster role Create a cluster role if you do not already have one to access the metrics: USD cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF Add this role to a user, run the following command: USD oc adm policy add-cluster-role-to-user prometheus-scraper <username> Metrics query Get the user token. openshift: USD oc whoami -t Run a metrics query in node or inside a pod, for example: USD curl --insecure -s -u <user>:<secret> \ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20 Example output # HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. # TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit="9f72191",gitVersion="v3.11.0+9f72191-135-dirty",major="3",minor="11+"} 1 # HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. 
# TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type="Hit"} 5 imageregistry_digest_cache_requests_total{type="Miss"} 24 # HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. # TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type="Hit"} 33 imageregistry_digest_cache_scoped_requests_total{type="Miss"} 44 # HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. # TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 # HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. # TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method="get",quantile="0.5"} 0.01296087 imageregistry_http_request_duration_seconds{method="get",quantile="0.9"} 0.014847248 imageregistry_http_request_duration_seconds{method="get",quantile="0.99"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method="get"} 12.260727916000022 1 The <user> object can be arbitrary, but <secret> tag must use the user token. 4.6. Additional resources For more information on allowing pods in a project to reference images in another project, see Allowing pods to reference images across projects . A kubeadmin can access the registry until deleted. See Removing the kubeadmin user for more information. For more information on configuring an identity provider, see Understanding identity provider configuration . | [
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull <name.io>/<image>",
"sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: oc whoami -t",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/registry/accessing-the-registry |
Planning your deployment | Planning your deployment Red Hat OpenShift Data Foundation 4.18 Important considerations when deploying Red Hat OpenShift Data Foundation 4.18 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for a workload only when the workload does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported and it is recommended to use a RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. Chapter 2.
Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from, Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see Installing on IBM Power . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the tasks and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. Note OpenShift Data Foundation's default configuration for MCG is optimized for low resource consumption and not performance. If you plan to use MCG often, see information about increasing resource limits in the knowledgebase article Performance tuning guide for Multicloud Object Gateway . 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides information that will help you select the most appropriate approach for your environment. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or make its services available from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1.
Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management. You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manages these applications. A simple deployment is best for situations where, Storage requirements are not clear. Red Hat OpenShift Data Foundation services runs co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. For Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when, Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, on cloud, virtualized environment, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when, Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, Site Reliability Engineering (SRE), storage, and so on, needs to manage the external cluster providing storage services. Possibly a pre-existing one. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . 
For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes. Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices, to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, those nodes do not require OpenShift Container Platform subscriptions. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . Chapter 3. Internal storage services Red Hat OpenShift Data Foundation services are available for internal consumption by Red Hat OpenShift Container Platform running on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE ROSA with hosted control planes (HCP) Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. Chapter 4. External storage services Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. An external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters. Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. Law mandates this standard for US government agencies and contractors, and it is also referenced in other international and industry specific standards. Red Hat OpenShift Data Foundation now uses the FIPS validated cryptographic modules. Red Hat Enterprise Linux CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable FIPS mode on OpenShift Container Platform before you install OpenShift Data Foundation.
OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat OpenShift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in physical media escaping your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small performance penalty. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for cluster-wide and persistent volume encryption. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher, where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you cannot migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The Multicloud Object Gateway supports encryption by default. See the deployment guides for more information.
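For illustration, when cluster-wide encryption is selected at deployment time, the StorageCluster resource carries an encryption section similar to the following sketch. The field names shown are assumptions based on the ocs.openshift.io/v1 StorageCluster API and can vary between versions, so treat the deployment guides as the authoritative reference:
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  encryption:
    clusterWide: true    # encrypt every OSD with LUKS; keys are kept in a Kubernetes secret
    kms:
      enable: false      # set to true to store keys in an external KMS such as HashiCorp Vault
  # ... remaining storage cluster configuration ...
Because the two key-storage methods are mutually exclusive, decide between a Kubernetes secret and an external KMS before the cluster is deployed.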
OpenShift Data Foundation supports cluster wide encryption with and without Key Management System (KMS). Cluster wide encryption with KMS is supported using the following service providers: HashiCorp Vault Thales Cipher Trust Manager Security common practices require periodic encryption key rotation. OpenShift Data Foundation automatically rotates encryption keys stored in kubernetes secret (non-KMS) and Vault on a weekly basis. However, key rotation for Vault KMS must be enabled after the storage cluster creation and does not happen by default. For more information refer to the deployment guides. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Cluster wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using vault tokens. A kubernetes secret containing the vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected then the administrator has to provide the vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with vault using serviceaccounts. If this authentication method is selected then the administrator has to provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4. 
Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol (msgr2) Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment while the cluster is being created. See the deployment guide for your environment for instructions on enabling data encryption in-transit during cluster creation. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx. Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx. Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. 
Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node, however each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at a SMT level 8 will require a 2 core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power have a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5. 
Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.18 is supported only on OpenShift Container Platform version 4.18 and its minor versions. Bug fixes for this version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports the gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. If high throughput is required, gp3-csi is recommended when deploying OpenShift Data Foundation. If you need high input/output operations per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 7.0 or later vSphere 8.0 or later For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an internal cluster must meet both the storage device requirements and have a storage class providing either a vSAN or VMFS datastore via the vsphere-volume provisioner, or VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides an Azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only.
An internal cluster must meet both the storage device requirements and have a storage class that provides a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86. An internal cluster must meet both the storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. ROSA with hosted control planes (HCP) Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both the storage device requirements and have a storage class that provides AWS EBS volumes via the gp3-csi provisioner. 7.1.10. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provides local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select the appropriate Version from the drop-down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install a RHCS cluster, see the installation guide . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by Kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs.
Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with additional 500GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs . 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides the flexibility in deployment and helps to reduce the resource consumption. Table 7.6. 
Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The default auto scale is between 1 and 2. 7.3.5. Resource requirements for using Network File System You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8 Gi of RAM. NFS is optional and is disabled by default. The NFS volume can be accessed two ways: In-cluster: by an application pod inside of the OpenShift cluster. Out of cluster: from outside of the OpenShift cluster. For more information about the NFS feature, see Creating exports using NFS . 7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post deployment. Table 7.7. Recommended resource requirement for different performance profiles Performance profile CPU Memory Lean 24 72 GiB Balanced 30 72 GiB Performance 45 96 GiB Important Make sure to select the profiles based on the available free resources as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for an internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or fewer per node. This recommendation both ensures that nodes stay below cloud provider dynamic storage device attachment limits and limits the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in increments of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements . 7.5.2.
Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.8. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.9. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D*M*N/3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB Chapter 8. Network requirements OpenShift Data Foundation requires at least one network interface that is used for the cluster network to be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced support for IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in OpenShift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use the multi-network plug-in Multus on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for the exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks . You can configure your Multus networks to use IPv4 or IPv6 as a technology preview. This works only for Multus networks that are pure IPv4 or pure IPv6. Networks cannot be mixed mode.
Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. 8.2.1. Multus prerequisites In order for Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. This section will help clarify questions that could arise. Two basic requirements must be met: OpenShift hosts must be able to route successfully to the Multus public network. Pods on the Multus public network must be able to route successfully to OpenShift hosts. These two requirements can be broken down further as follows: For routing Kubernetes hosts to the Multus public network, each host must ensure the following: The host must have an interface connected to the Multus public network (the "public-network-interface"). The "public-network-interface" must have an IP address. A route must exist to direct traffic destined for pods on the Multus public network through the "public-network-interface". For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachmentDefinition must be configured to ensure the following: The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network. To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. Generally, both the NetworkAttachmentDefinition, and node configurations must use the same network technology (Macvlan) to connect to the Multus public network. Node configurations and pod configurations are interrelated and tightly coupled. Both must be planned at the same time, and OpenShift Data Foundation cannot support Multus public networks without both. The "public-network-interface" must be the same for both. Generally, the connection technology (Macvlan) should also be the same for both. IP range(s) in the NetworkAttachmentDefinition must be encoded as routes on nodes, and, in mirror, IP ranges for nodes must be encoded as routes in the NetworkAttachmentDefinition. Some installations might not want to use the same public network IP address range for both pods and nodes. In the case where there are different ranges for pods and nodes, additional steps must be taken to ensure each range routes to the other so that they act as a single, contiguous network.These requirements require careful planning. See Multus examples to help understand and implement these requirements. Tip There are often ten or more OpenShift Data Foundation pods per storage node. The pod address space usually needs to be several times larger (or more) than the host address space. OpenShift Container Platform recommends using the NMState operator's NodeNetworkConfigurationPolicies as a good method of configuring hosts to meet host requirements. Other methods can be used as well if needed. 8.2.1.1. 
Multus network address space sizing Networks must have enough addresses to account for the number of storage pods that will attach to the network, plus some additional space to account for failover events. It is highly recommended to also plan ahead for future storage cluster expansion and estimate how large the OpenShift Container Platform and OpenShift Data Foundation clusters may grow in the future. Reserving addresses for future expansion means that there is lower risk of depleting the IP address pool unexpectedly during expansion. It is safest to allocate 25% more addresses (or more) than the total maximum number of addresses that are expected to be needed at one time in the storage cluster's lifetime. This helps lower the risk of depleting the IP address pool during failover and maintenance. For ease of writing corresponding network CIDR configurations, rounding totals up to the nearest power of 2 is also recommended. Three ranges must be planned: If used, the public Network Attachment Definition address space must include enough IPs for the total number of ODF pods running in the openshift-storage namespace If used, the cluster Network Attachment Definition address space must include enough IPs for the total number of OSD pods running in the openshift-storage namespace If the Multus public network is used, the node public network address space must include enough IPs for the total number of OpenShift nodes connected to the Multus public network. Note If the cluster uses a unified address space for the public Network Attachment Definition and node public network attachments, add these two requirements together. This is relevant, for example, if DHCP is used to manage IPs for the public network. Important For users with environments with piecewise CIDRs, that is one network with two or more different CIDRs, auto-detection is likely to find only a single CIDR, meaning Ceph daemons may fail to start or fail to connect to the network. See this knowledgebase article for information to mitigate this issue. 8.2.1.1.1. Recommendation The following recommendation suffices for most organizations. The recommendation uses the last 6.25% (1/16) of the reserved private address space (192.168.0.0/16), assuming the beginning of the range is in use or otherwise desirable. Approximate maximums (accounting for 25% overhead) are given. Table 8.1. Multus recommendations Network Network range CIDR Approximate maximums Public Network Attachment Definition 192.168.240.0/21 1,600 total ODF pods Cluster Network Attachment Definition 192.168.248.0/22 800 OSDs Node public network attachments 192.168.252.0/23 400 total nodes 8.2.1.1.2. Calculation More detailed address space sizes can be determined as follows: Determine the maximum number of OSDs that are likely to be needed in the future. Add 25%, then add 5. Round the result up to the nearest power of 2. This is the cluster address space size. Begin with the un-rounded number calculated in step 1. Add 64, then add 25%. Round the result up to the nearest power of 2. This is the public address space size for pods. Determine the maximum number of total OpenShift nodes (including storage nodes) that are likely to be needed in the future. Add 25%. Round the result up to the nearest power of 2. This is the public address space size for nodes. 8.2.1.2. 
Verifying requirements have been met After configuring nodes and creating the Multus public NetworkAttachmentDefinition (see Creating network attachment definitions ), check that the node configurations and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each node can ping pods via the public network. Start a daemonset similar to the following example: List the Multus public network IPs assigned to test pods using a command like the following example. This example command lists all IPs assigned to all test pods (each will have 2 IPs). From the output, it is easy to manually extract the IPs associated with the Multus public network. In the example, test pod IPs on the Multus public network are: 192.168.20.22 192.168.20.29 192.168.20.23 Check that each node (NODE) can reach all test pod IPs over the public network: If any node does not get a successful ping to a running pod, it is not safe to proceed. Diagnose and fix the issue, then repeat this testing. Some reasons you may encounter a problem include: The host may not be properly attached to the Multus public network (via Macvlan) The host may not be properly configured to route to the pod IP range The public NetworkAttachmentDefinition may not be properly configured to route back to the host IP range The host may have a firewall rule blocking the connection in either direction The network switch may have a firewall or security rule blocking the connection Suggested debugging steps: Ensure nodes can ping each other using their public network "shim" IPs Ensure the output of ip address on each node shows the "shim" interface with the expected IP address and routes 8.2.2. Multus examples The relevant network plan for this cluster is as follows: A dedicated NIC provides eth0 for the Multus public network Macvlan will be used to attach OpenShift pods to eth0 The IP range 192.168.0.0/16 is free in the example cluster - pods and nodes will share this IP range on the Multus public network Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 Kubernetes hosts, more than the example organization will ever need) Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255) The example organization does not want to use DHCP unless necessary; therefore, nodes will have IPs on the Multus network (via eth0) assigned statically using the NMState operator's NodeNetworkConfigurationPolicy resources With DHCP unavailable, Whereabouts will be used to assign IPs to the Multus public network because it is easy to use out of the box There are 3 compute nodes in the OpenShift cluster on which OpenShift Data Foundation also runs: compute-0, compute-1, and compute-2 Nodes' network policies must be configured to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Generally speaking, the host must connect to the Multus public network using the same technology that pods do. Pod connections are configured in the Network Attachment Definition. Because the host IP range is a subset of the whole range, hosts are not able to route to pods simply by IP assignment. A route must be added to hosts to allow them to route to the whole 192.168.0.0/16 range. NodeNetworkConfigurationPolicy desiredState specs will look like the following: For static IP management, each node must have a different NodeNetworkConfigurationPolicy. Select separate nodes for each policy to configure static networks.
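Once the three policies are written, they can be applied and their rollout checked through the NMState operator's status resources. The following is an editor's sketch only; the file names are assumptions:

oc apply -f ceph-public-net-shim-compute-0.yaml   # repeat for compute-1 and compute-2
oc get nncp                                       # each policy should eventually report Available
oc get nnce                                       # per-node enactments show whether each host applied its shim interface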
A "shim" interface is used to connect hosts to the Multus public network using the same technology as the Network Attachment Definition will use. The host's "shim" must be of the same type as planned for pods, macvlan in this example. The interface must match the Multus public network interface selected in planning, eth0 in this example. The ipv4 (or ipv6 ) section configures node IP addresses on the Multus public network. IPs assigned to this node's shim must match the plan. This example uses 192.168.252.0/22 for node IPs on the Multus public network. For static IP management, don't forget to change the IP for each node. The routes section instructs nodes how to reach pods on the Multus public network. The route destination(s) must match the CIDR range planned for pods. In this case, it is safe to use the entire 192.168.0.0/16 range because it won't affect nodes' ability to reach other nodes over their "shim" interfaces. In general, this must match the CIDR used in the Multus public NetworkAttachmentDefinition. The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' exclude option to simplify the range request. The Whereabouts routes[].dst option ensures pods route to hosts via the Multus public network. This must match the plan for how to attach pods to the Multus public network. Nodes must attach using the same technology, Macvlan. The interface must match the Multus public network interface selected in planning, eth0 in this example. The plan for this example uses whereabouts instead of DHCP for assigning IPs to pods. For this example, it was decided that pods could be assigned any IP in the range 192.168.0.0/16 with the exception of a portion of the range allocated to nodes (see 5). whereabouts provides an exclude directive that allows easily excluding the range allocated for nodes from its pool. This allows keeping the range directive (see 4) simple. The routes section instructs pods how to reach nodes on the Multus public network. The route destination ( dst ) must match the CIDR range planned for nodes. 8.2.3. Holder pod deprecation Due to the recurring maintenance impact of holder pods during upgrade (holder pods are present when Multus is enabled), holder pods are deprecated in the ODF v4.16 release and targeted for removal in the ODF v4.18 release. This deprecation requires completing additional network configuration actions before removing the holder pods. In ODF v4.16, clusters with Multus enabled are upgraded to v4.17 following standard upgrade procedures. After the ODF cluster (with Multus enabled) is successfully upgraded to v4.17, administrators must then complete the procedure documented in the article Disabling Multus holder pods to disable and remove holder pods. Be aware that this disabling procedure is time consuming; however, it is not critical to complete the entire process immediately after upgrading to v4.17. It is critical to complete the process before ODF is upgraded to v4.18. 8.2.4. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN).
The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from the OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are easier to ensure. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Dual network interface segregated configuration schematic example: Triple network interface full segregated configuration schematic example: 8.2.5. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface; however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.6. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which are later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created.
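For illustration only, a minimal NetworkAttachmentDefinition of this kind might look like the following sketch. It uses the macvlan driver and the DHCP IPAM type described below; the name, namespace, and parent interface (eth1) are assumptions rather than required values, the dhcp IPAM type relies on a DHCP server reachable on that network, and a complete whereabouts-based example is shown in the Multus examples section:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public              # assumed name for this sketch
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }'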
OpenShift Data Foundation supports the macvlan driver, which includes the following features: Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan . Bridge mode is almost always the best choice. Near-host performance when network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. OpenShift Data Foundation supports the following two types IP address management: whereabouts DHCP Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require range field. Does not require a DHCP server to provide IPs for Pods. Network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network. Caution If there is a DHCP server, ensure Multus configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.7. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus based configuration on bare metal. Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) & DR solutions for stateful apps which are broadly categorized into two broad categories: Metro-DR : Single Region and cross data center protection with no data loss. Regional-DR : Cross Region protection with minimal potential data loss. Disaster Recovery with stretch cluster : Single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities. 9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud these would be similar to protecting from an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 
Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. Note Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter and RHACM requirements . 9.2. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on Asynchronous data replication and hence could have a potential data loss but provides the protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook and it's enhanced with the ability to: Enable pools for mirroring. Automatically mirror images across RBD pools. Provides csi-addons to manage per Persistent Volume Claim mirroring. This release of Regional-DR supports Multi-Cluster configuration that is deployed across different regions and data centers. For example, a 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. 
For detailed solution requirements, see Regional-DR requirements and RHACM requirements . 9.3. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in the OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites(Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, You must have a minimum of five nodes across three zones, where: Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation. For example, the zones can be labeled as: topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . 
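For the custom NTP configuration mentioned in the note above, a common approach is to deliver a chrony configuration through a MachineConfig generated from a Butane file. The following is an editor's sketch only; the internal NTP server address and the Butane version string are assumptions to adapt to your environment, and an equivalent file is normally created for the master role as well:

variant: openshift
version: 4.18.0
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          server ntp.example.com iburst
          driftfile /var/lib/chrony/drift
          makestep 1.0 3
          rtcsync

Render the file with butane 99-worker-chrony.bu -o 99-worker-chrony.yaml and apply the result with oc apply -f 99-worker-chrony.yaml .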
Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . Chapter 11. Supported and Unsupported features for IBM Power and IBM Z Table 11.1. List of supported and unsupported features on IBM Power and IBM Z Features IBM Power IBM Z Compact deployment Unsupported Unsupported Dynamic storage devices Unsupported Supported Stretched Cluster - Arbiter Supported Unsupported Federal Information Processing Standard Publication (FIPS) Unsupported Unsupported Ability to view pool compression metrics Supported Unsupported Automated scaling of Multicloud Object Gateway (MCG) endpoint pods Supported Unsupported Alerts to control overprovision Supported Unsupported Alerts when Ceph Monitor runs out of space Supported Unsupported Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem Unsupported Unsupported IPV6 support Unsupported Unsupported Multus Unsupported Unsupported Multicloud Object Gateway (MCG) bucket replication Supported Unsupported Quota support for object data Supported Unsupported Minimum deployment Unsupported Unsupported Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) Supported Unsupported Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM Supported Supported Single Node solution for Radio Access Network (RAN) Unsupported Unsupported Support for network file system (NFS) services Supported Unsupported Ability to change Multicloud Object Gateway (MCG) account credentials Supported Unsupported Multicluster monitoring in Red Hat Advanced Cluster Management console Supported Unsupported Deletion of expired objects in Multicloud Object Gateway lifecycle Supported Unsupported Agnostic deployment of OpenShift Data Foundation on any Openshift supported platform Unsupported Unsupported Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure Unsupported Unsupported Openshift dual stack with OpenShift Data Foundation using IPv4 Unsupported Unsupported Ability to disable Multicloud Object Gateway external service during deployment Unsupported Unsupported Ability to allow overriding of default NooBaa backing store Supported Unsupported Allowing ocs-operator to deploy two MGR pods, one active and one standby Supported Unsupported Disaster Recovery for brownfield deployments Unsupported Supported Automatic scaling of RGW Unsupported Unsupported Chapter 12. steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make available services from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides. 
Internal mode Deploying OpenShift Data Foundation using Amazon web services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMWare vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z Deploying OpenShift Data Foundation on any platform External mode Deploying OpenShift Data Foundation in external mode Internal or external For deploying multiple clusters, see Deploying multiple OpenShift Data Foundation clusters . | [
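"# Editor's illustrative sketch, not extracted from the original document: prune the redhat-operator index to the packages listed in the disconnected environment chapter; the index version tag and mirror registry are assumptions opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.18 -p ocs-operator,odf-operator,mcg-operator,odf-csi-addons-operator,odr-cluster-operator,odr-hub-operator,local-storage-operator,odf-multicluster-orchestrator -t <mirror_registry>/redhat/redhat-operator-index:v4.18",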
"apiVersion: apps/v1 kind: DaemonSet metadata: name: multus-public-test namespace: openshift-storage labels: app: multus-public-test spec: selector: matchLabels: app: multus-public-test template: metadata: labels: app: multus-public-test annotations: k8s.v1.cni.cncf.io/networks: openshift-storage/public-net # spec: containers: - name: test image: quay.io/ceph/ceph:v18 # image known to have 'ping' installed command: - sleep - infinity resources: {}",
"oc -n openshift-storage describe pod -l app=multus-public-test | grep -o -E 'Add .* from .*' Add eth0 [10.128.2.86/23] from ovn-kubernetes Add net1 [192.168.20.22/24] from default/public-net Add eth0 [10.129.2.173/23] from ovn-kubernetes Add net1 [192.168.20.29/24] from default/public-net Add eth0 [10.131.0.108/23] from ovn-kubernetes Add net1 [192.168.20.23/24] from default/public-net",
"oc debug node/NODE Starting pod/NODE-debug To use host binaries, run `chroot /host` Pod IP: **** If you don't see a command prompt, try pressing enter. sh-5.1# chroot /host sh-5.1# ping 192.168.20.22 PING 192.168.20.22 (192.168.20.22) 56(84) bytes of data. 64 bytes from 192.168.20.22: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 192.168.20.22: icmp_seq=2 ttl=64 time=0.056 ms ^C --- 192.168.20.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1046ms rtt min/avg/max/mdev = 0.056/0.074/0.093/0.018 ms sh-5.1# ping 192.168.20.29 PING 192.168.20.29 (192.168.20.29) 56(84) bytes of data. 64 bytes from 192.168.20.29: icmp_seq=1 ttl=64 time=0.403 ms 64 bytes from 192.168.20.29: icmp_seq=2 ttl=64 time=0.181 ms ^C --- 192.168.20.29 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1007ms rtt min/avg/max/mdev = 0.181/0.292/0.403/0.111 ms sh-5.1# ping 192.168.20.23 PING 192.168.20.23 (192.168.20.23) 56(84) bytes of data. 64 bytes from 192.168.20.23: icmp_seq=1 ttl=64 time=0.329 ms 64 bytes from 192.168.20.23: icmp_seq=2 ttl=64 time=0.227 ms ^C --- 192.168.20.23 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1047ms rtt min/avg/max/mdev = 0.227/0.278/0.329/0.051 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-0 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-0 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-0 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-1 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-1 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-1 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-2 # [1] namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-2 # [2] desiredState: Interfaces: [3] - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan # [4] state: up mac-vlan: base-iface: eth0 # [5] mode: bridge promiscuous: true ipv4: # [6] enabled: true dhcp: false address: - ip: 192.168.252.2 # STATIC IP FOR compute-2 # [7] prefix-length: 22 routes: # [8] config: - destination: 192.168.0.0/16 # [9] next-hop-interface: odf-pub-shim",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", # [1] \"master\": \"eth0\", # [2] \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", # [3] \"range\": \"192.168.0.0/16\", # [4] \"exclude\": [ \"192.168.252.0/22\" # [5] ], \"routes\": [ # [6] {\"dst\": \"192.168.252.0/22\"} # [7] ] } }'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/planning_your_deployment/platform-requirements_rhocs |
31.3. Taking a Site Offline | 31.3. Taking a Site Offline In Red Hat JBoss Data Grid's Cross-datacenter replication configuration, if backing up to one site fails a certain number of times during a time interval, that site can be marked as offline automatically. This feature removes the need for manual intervention by an administrator to mark the site as offline. It is possible to configure JBoss Data Grid to take down a site automatically when specified conditions are met, or for an administrator to manually take down a site: Configure automatically taking a site offline: Declaratively in Remote Client-Server mode. Declaratively in Library mode. Using the programmatic method. Manually taking a site offline: Using JBoss Operations Network (JON). Using the JBoss Data Grid Command Line Interface (CLI). Report a bug 31.3.1. Taking a Site Offline (Remote Client-Server Mode) In Red Hat JBoss Data Grid's Remote Client-Server mode, the take-offline element is added to the backup element to configure when a site is automatically taken offline. Example 31.2. Taking a Site Offline in Remote Client-Server Mode The take-offline element use the following parameters to configure when to take a site offline: The after-failures parameter specifies the number of times attempts to contact a site can fail before the site is taken offline. The min-wait parameter specifies the number (in milliseconds) to wait to mark an unresponsive site as offline. The site is offline when the min-wait period elapses after the first attempt, and the number of failed attempts specified in the after-failures parameter occur. Report a bug 31.3.2. Taking a Site Offline (Library Mode) In Red Hat JBoss Data Grid's Library mode, use the backupFor element after defining all back up sites within the backups element: Example 31.3. Taking a Site Offline in Library Mode Add the takeOffline element to the backup element to configure automatically taking a site offline. The afterFailures parameter specifies the number of times attempts to contact a site can fail before the site is taken offline. The default value ( 0 ) allows an infinite number of failures if minTimeToWait is less than 0 . If the minTimeToWait is not less than 0 , afterFailures behaves as if the value is negative. A negative value for this parameter indicates that the site is taken offline after the time specified by minTimeToWait elapses. The minTimeToWait parameter specifies the number (in milliseconds) to wait to mark an unresponsive site as offline. The site is taken offline after the number attempts specified in the afterFailures parameter conclude and the time specified by minTimeToWait after the first failure has elapsed. If this parameter is set to a value smaller than or equal to 0 , this parameter is disregarded and the site is taken offline based solely on the afterFailures parameter. Report a bug 31.3.3. Taking a Site Offline (Programmatically) To configure taking a Cross-datacenter replication site offline automatically in Red Hat JBoss Data Grid programmatically: Example 31.4. Taking a Site Offline Programmatically Report a bug 31.3.4. Taking a Site Offline via JBoss Operations Network (JON) A site can be taken offline in Red Hat JBoss Data Grid using the JBoss Operations Network operations. For a list of the metrics, see Section 22.6.2, "JBoss Operations Network Plugin Operations" Report a bug 31.3.5. 
Taking a Site Offline via the CLI Use Red Hat JBoss Data Grid's Command Line Interface (CLI) to manually take a site from a cross-datacenter replication configuration down if it is unresponsive using the site command. The site command can be used to check the status of a site as follows: The result of this command would either be online or offline according to the current status of the named site. The command can be used to bring a site online or offline by name as follows: If the command is successful, the output ok displays after the command. As an alternate, the site can also be brought online using JMX (see Section 31.3.6, "Bring a Site Back Online" for details). For more information about the JBoss Data Grid CLI and its commands, see the Developer Guide 's chapter on the JBoss Data Grid Command Line Interface (CLI) Report a bug 31.3.6. Bring a Site Back Online After a site is taken offline, the site can be brought back online either using the JMX console to invoke the bringSiteOnline( siteName ) operation on the XSiteAdmin MBean (See Section C.23, "XSiteAdmin" for details) or using the CLI (see Section 31.3.5, "Taking a Site Offline via the CLI" for details). Report a bug | [
"<backup> <take-offline after-failures=\"USD{NUMBER}\" min-wait=\"USD{PERIOD}\" /> </backup>",
"<backup> <takeOffline afterFailures=\"USD{NUM}\" minTimeToWait=\"USD{PERIOD}\"/> </backup>",
"lon.sites().addBackup() .site(\"NYC\") .backupFailurePolicy(BackupFailurePolicy.FAIL) .strategy(BackupConfiguration.BackupStrategy.SYNC) .takeOffline() .afterFailures(500) .minTimeToWait(10000);",
"[jmx://localhost:12000/MyCacheManager/namedCache]> site --status USD{SITENAME}",
"[jmx://localhost:12000/MyCacheManager/namedCache]> site --offline USD{SITENAME}",
"[jmx://localhost:12000/MyCacheManager/namedCache]> site --online USD{SITENAME}"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-taking_a_site_offline |
Chapter 14. Using TLS certificates for applications accessing RGW | Chapter 14. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in one of several forms, such as an option included in the Deployment configuration file, a file passed in the request, or a certificate stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the kubernetes secret: <secret_name> The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the kubernetes secret: <secret_name> The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert . 14.1. Accessing External RGW server in OpenShift Data Foundation Accessing External RGW server using Object Bucket Claims The S3 credentials, such as the AccessKey or Secret Key, are stored in the secret generated by the Object Bucket Claim (OBC) creation and you can fetch them by using the following commands: Similarly, you can fetch the endpoint details from the configmap of the OBC: Accessing External RGW server using the Ceph Object Store User CR You can fetch the S3 credentials and endpoint details from the secret generated as part of the Ceph Object Store User CR: Important For both access mechanisms, you can either request new certificates from the administrator or reuse the certificates from the Kubernetes secret, ceph-rgw-tls-cert . | [
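"# Editor's illustrative sketch, not extracted from the original document: use the fetched certificate and OBC details with an S3 client; the file name, keys, and endpoint values are assumptions oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d > rgw-ca.crt AWS_ACCESS_KEY_ID=<access_key> AWS_SECRET_ACCESS_KEY=<secret_key> aws --endpoint-url https://<BUCKET_HOST>:<BUCKET_PORT> --ca-bundle ./rgw-ca.crt s3 ls s3://<BUCKET_NAME>",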
"oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d",
"oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d",
"oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode",
"oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_HOST}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_PORT}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_NAME}'",
"oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.AccessKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.SecretKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.Endpoint}' | base64 --decode"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_hybrid_and_multicloud_resources/using-tls-certificates-for-applications-accessing-rgw_rhodf |
Chapter 15. Using the Multicloud Object Gateway's Security Token Service to assume the role of another user | Chapter 15. Using the Multicloud Object Gateway's Security Token Service to assume the role of another user Multicloud Object Gateway (MCG) provides support for a security token service (STS) similar to the one provided by Amazon Web Services. To allow other users to assume the role of a certain user, you need to assign a role configuration to the user. You can manage the configuration of roles using the MCG CLI tool. The following example shows a role configuration that allows two MCG users ( [email protected] and [email protected] ) to assume a certain user's role: Assign the role configuration by using the MCG CLI tool. Collect the following information before proceeding to assume the role as it is needed for the subsequent steps: The access key ID and secret access key of the assumer (the user who assumes the role) The MCG STS endpoint, which can be retrieved by using the command: The access key ID of the assumed user. The value of role_name in your role configuration. A name of your choice for the role session. After the role configuration is ready, assume the role by filling in the data collected in the previous step: Note Adding --no-verify-ssl might be necessary depending on your cluster's configuration. The resulting output contains the access key ID, secret access key, and session token that can be used for executing actions while assuming the other user's role. You can use the credentials generated after the assume role steps as shown in the following example: | [
"'{\"role_name\": \"AllowTwoAssumers\", \"assume_role_policy\": {\"version\": \"2012-10-17\", \"statement\": [ {\"action\": [\"sts:AssumeRole\"], \"effect\": \"allow\", \"principal\": [\"[email protected]\", \"[email protected]\"]}]}}'",
"mcg sts assign-role --email <assumed user's username> --role_config '{\"role_name\": \"AllowTwoAssumers\", \"assume_role_policy\": {\"version\": \"2012-10-17\", \"statement\": [ {\"action\": [\"sts:AssumeRole\"], \"effect\": \"allow\", \"principal\": [\"[email protected]\", \"[email protected]\"]}]}}'",
"oc -n openshift-storage get route",
"AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> aws --endpoint-url <mcg-sts-endpoint> sts assume-role --role-arn arn:aws:sts::<assumed-user-access-key-id>:role/<role-name> --role-session-name <role-session-name>",
"AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> AWS_SESSION_TOKEN=<session token> aws --endpoint-url <mcg-s3-endpoint> s3 ls"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_hybrid_and_multicloud_resources/using-the-multi-cloud-object-gateway-security-token-service-to-assume-the-role-of-another-user_rhodf |
8.4. Enabling, Disabling, and Banning Cluster Resources | 8.4. Enabling, Disabling, and Banning Cluster Resources In addition to the pcs resource move and pcs resource relocate commands described in Section 8.1, "Manually Moving Resources Around the Cluster" , there are a variety of other commands you can use to control the behavior of cluster resources. You can manually stop a running resource and prevent the cluster from starting it again with the following command. Depending on the rest of the configuration (constraints, options, failures, and so on), the resource may remain started. If you specify the --wait option, pcs will wait up to 'n' seconds for the resource to stop and then return 0 if the resource is stopped or 1 if the resource has not stopped. If 'n' is not specified it defaults to 60 minutes. You can use the following command to allow the cluster to start a resource. Depending on the rest of the configuration, the resource may remain stopped. If you specify the --wait option, pcs will wait up to 'n' seconds for the resource to start and then return 0 if the resource is started or 1 if the resource has not started. If 'n' is not specified it defaults to 60 minutes. Use the following command to prevent a resource from running on a specified node, or on the current node if no node is specified. Note that when you execute the pcs resource ban command, this adds a -INFINITY location constraint to the resource to prevent it from running on the indicated node. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the indicated node; where the resources can run at that point depends on how you have configured your resources initially. For information on resource constraints, see Chapter 7, Resource Constraints . If you specify the --master parameter of the pcs resource ban command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource ban command to indicate a period of time the constraint should remain. For information on specifying units for the lifetime parameter and on specifying the intervals at which the lifetime parameter should be checked, see Section 8.1, "Manually Moving Resources Around the Cluster" . You can optionally configure a --wait[= n ] parameter for the pcs resource ban command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. You can use the debug-start parameter of the pcs resource command to force a specified resource to start on the current node, ignoring the cluster recommendations and printing the output from starting the resource. This is mainly used for debugging resources; starting resources on a cluster is (almost) always done by Pacemaker and not directly with a pcs command. If your resource is not starting, it is usually due to either a misconfiguration of the resource (which you debug in the system log), constraints that prevent the resource from starting, or the resource being disabled. You can use this command to test resource configuration, but it should not normally be used to start resources in a cluster. The format of the debug-start command is as follows. | [
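"# Editor's illustrative usage sketch, not extracted from the original document; the resource and node names are assumptions pcs resource ban my-webserver node2.example.com lifetime=PT30M --wait=60",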
"pcs resource disable resource_id [--wait[= n ]]",
"pcs resource enable resource_id [--wait[= n ]]",
"pcs resource ban resource_id [ node ] [--master] [lifetime= lifetime ] [--wait[= n ]]",
"pcs resource debug-start resource_id"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resource_control-haar |
Chapter 6. Working with accelerator profiles | Chapter 6. Working with accelerator profiles To configure accelerators for your data scientists to use in OpenShift AI, you must create an associated accelerator profile. An accelerator profile is a custom resource definition (CRD) on OpenShift that has an AcceleratorProfile resource, and defines the specification of the accelerator. You can create and manage accelerator profiles by selecting Settings Accelerator profiles on the OpenShift AI dashboard. For accelerators that are new to your deployment, you must manually configure an accelerator profile for each accelerator. If your deployment contains an accelerator before you upgrade, the associated accelerator profile remains after the upgrade. You can manage the accelerators that appear to your data scientists by assigning specific accelerator profiles to your custom notebook images. This example shows the code for a Habana Gaudi 1 accelerator profile: --- apiVersion: dashboard.opendatahub.io/v1alpha kind: AcceleratorProfile metadata: name: hpu-profile-first-gen-gaudi spec: displayName: Habana HPU - 1st Gen Gaudi description: First Generation Habana Gaudi device enabled: true identifier: habana.ai/gaudi tolerations: - effect: NoSchedule key: habana.ai/gaudi operator: Exists --- The accelerator profile code appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD). For more information about accelerator profile attributes, see the following table: Table 6.1. Accelerator profile attributes Attribute Type Required Description displayName String Required The display name of the accelerator profile. description String Optional Descriptive text defining the accelerator profile. identifier String Required A unique identifier defining the accelerator resource. enabled Boolean Required Determines if the accelerator is visible in OpenShift AI. tolerations Array Optional The tolerations that can apply to notebooks and serving runtimes that use the accelerator. For more information about the toleration attributes that OpenShift AI supports, see Toleration v1 core . Additional resources Toleration v1 core Understanding taints and tolerations Managing resources from custom resource definitions 6.1. Viewing accelerator profiles If you have defined accelerator profiles for OpenShift AI, you can view, enable, and disable them from the Accelerator profiles page. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Your deployment contains existing accelerator profiles. Procedure From the OpenShift AI dashboard, click Settings Accelerator profiles . The Accelerator profiles page appears, displaying existing accelerator profiles. Inspect the list of accelerator profiles. To enable or disable an accelerator profile, on the row containing the accelerator profile, click the toggle in the Enable column. Verification The Accelerator profiles page appears appears, displaying existing accelerator profiles. 6.2. Creating an accelerator profile To configure accelerators for your data scientists to use in OpenShift AI, you must create an associated accelerator profile. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Accelerator profiles . The Accelerator profiles page appears, displaying existing accelerator profiles. 
To enable or disable an existing accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column. Click Create accelerator profile . The Create accelerator profile dialog appears. In the Name field, enter a name for the accelerator profile. In the Identifier field, enter a unique string that identifies the hardware accelerator associated with the accelerator profile. Optional: In the Description field, enter a description for the accelerator profile. To enable or disable the accelerator profile immediately after creation, click the toggle in the Enable column. Optional: Add a toleration to schedule pods with matching taints. Click Add toleration . The Add toleration dialog opens. From the Operator list, select one of the following options: Equal - The key/value/effect parameters must match. This is the default. Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any. From the Effect list, select one of the following options: None NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. In the Key field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition. Forever - Pods stay permanently bound to a node. Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition. Click Add . Click Create accelerator profile . Verification The accelerator profile appears on the Accelerator profiles page. The Accelerator list appears on the Start a notebook server page. After you select an accelerator, the Number of accelerators field appears, which you can use to choose the number of accelerators for your notebook server. The accelerator profile appears on the Instances tab on the details page for the AcceleratorProfile custom resource definition (CRD). Additional resources Toleration v1 core Understanding taints and tolerations Managing resources from custom resource definitions 6.3. Updating an accelerator profile You can update the existing accelerator profiles in your deployment. You might want to change important identifying information, such as the display name, the identifier, or the description. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The accelerator profile exists in your deployment. Procedure From the OpenShift AI dashboard, click Settings Accelerator profiles . The Accelerator profiles page appears, displaying existing accelerator profiles. To enable or disable an accelerator profile, on the row containing the relevant accelerator profile, click the toggle in the Enable column. Click the action menu (...) and select Edit from the list.
The Edit accelerator profile dialog opens. In the Name field, update the accelerator profile name. In the Identifier field, update the unique string that identifies the hardware accelerator associated with the accelerator profile, if applicable. Optional: In the Description field, update the accelerator profile. To enable or disable the accelerator profile immediately after creation, click the toggle in the Enable column. Optional: Add a toleration to schedule pods with matching taints. Click Add toleration . The Add toleration dialog opens. From the Operator list, select one of the following options: Equal - The key/value/effect parameters must match. This is the default. Exists - The key/effect parameters must match. You must leave a blank value parameter, which matches any. From the Effect list, select one of the following options: None NoSchedule - New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule - New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute - New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. In the Key field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. In the Value field, enter a toleration value. The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. In the Toleration Seconds section, select one of the following options to specify how long a pod stays bound to a node that has a node condition. Forever - Pods stays permanently bound to a node. Custom value - Enter a value, in seconds, to define how long pods stay bound to a node that has a node condition. Click Add . If your accelerator profile contains existing tolerations, you can edit them. Click the action menu (...) on the row containing the toleration that you want to edit and select Edit from the list. Complete the applicable fields to update the details of the toleration. Click Update . Click Update accelerator profile . Verification If your accelerator profile has new identifying information, this information appears in the Accelerator list on the Start a notebook server page. Additional resources Toleration v1 core Understanding taints and tolerations Managing resources from custom resource definitions 6.4. Deleting an accelerator profile To discard accelerator profiles that you no longer require, you can delete them so that they do not appear on the dashboard. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The accelerator profile that you want to delete exists in your deployment. Procedure From the OpenShift AI dashboard, click Settings Accelerator profiles . The Accelerator profiles page appears, displaying existing accelerator profiles. Click the action menu ( ... ) beside the accelerator profile that you want to delete and click Delete . The Delete accelerator profile dialog opens. Enter the name of the accelerator profile in the text field to confirm that you intend to delete it. Click Delete . Verification The accelerator profile no longer appears on the Accelerator profiles page. 
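Accelerator profiles can also be created or edited directly as custom resources with the OpenShift CLI rather than through the dashboard. The following sketch adapts the Habana Gaudi example shown earlier for an NVIDIA GPU; the profile name is arbitrary and the namespace is an assumption that should match the project where your OpenShift AI dashboard components run:

apiVersion: dashboard.opendatahub.io/v1alpha
kind: AcceleratorProfile
metadata:
  name: nvidia-gpu-profile
  namespace: redhat-ods-applications    # assumed OpenShift AI applications project
spec:
  displayName: NVIDIA GPU
  description: NVIDIA GPU devices
  enabled: true
  identifier: nvidia.com/gpu
  tolerations:
    - effect: NoSchedule
      key: nvidia.com/gpu
      operator: Exists

Apply it with oc apply -f nvidia-gpu-profile.yaml ; the profile then appears on the Accelerator profiles page.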
Additional resources Toleration v1 core Understanding taints and tolerations Managing resources from custom resource definitions 6.5. Configuring a recommended accelerator for notebook images To help you indicate the most suitable accelerators to your data scientists, you can configure a recommended tag to appear on the dashboard. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have existing notebook images in your deployment. You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . Procedure From the OpenShift AI dashboard, click Settings Notebook images . The Notebook images page appears. Previously imported notebook images are displayed. Click the action menu (...) and select Edit from the list. The Update notebook image dialog opens. From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the notebook image. If the notebook image contains only one accelerator identifier, the identifier name displays by default. Click Update . Note If you have already configured an accelerator identifier for a notebook image, you can specify a recommended accelerator for the notebook image by creating an associated accelerator profile. To do this, click Create profile on the row containing the notebook image and complete the relevant fields. If the notebook image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile. Verification When your data scientists select an accelerator with a specific notebook image, a tag appears to the corresponding accelerator indicating its compatibility. 6.6. Configuring a recommended accelerator for serving runtimes To help you indicate the most suitable accelerators to your data scientists, you can configure a recommended accelerator tag for your serving runtimes. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . Procedure From the OpenShift AI dashboard, click Settings > Serving runtimes . The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled in your OpenShift AI deployment. By default, the OpenVINO Model Server runtime is pre-installed and enabled in OpenShift AI. Edit your custom runtime that you want to add the recommended accelerator tag to, click the action menu (...) and select Edit . A page with an embedded YAML editor opens. Note You cannot directly edit the OpenVINO Model Server runtime that is included in OpenShift AI by default. However, you can clone this runtime and edit the cloned version. You can then add the edited clone as a new, custom runtime. To do this, click the action menu beside the OpenVINO Model Server and select Duplicate . In the editor, enter the YAML code to apply the annotation opendatahub.io/recommended-accelerators . The excerpt in this example shows the annotation to set a recommended tag for an NVIDIA GPU accelerator: metadata: annotations: opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]' Click Update . 
Verification When your data scientists select an accelerator with a specific serving runtime, a tag appears to the corresponding accelerator indicating its compatibility. | [
"--- apiVersion: dashboard.opendatahub.io/v1alpha kind: AcceleratorProfile metadata: name: hpu-profile-first-gen-gaudi spec: displayName: Habana HPU - 1st Gen Gaudi description: First Generation Habana Gaudi device enabled: true identifier: habana.ai/gaudi tolerations: - effect: NoSchedule key: habana.ai/gaudi operator: Exists ---",
"metadata: annotations: opendatahub.io/recommended-accelerators: '[\"nvidia.com/gpu\"]'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_accelerators/working-with-accelerator-profiles_accelerators |
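As an alternative to the dashboard forms in the accelerator profile procedures above, you can save the AcceleratorProfile manifest shown in the commands field to a file and apply it with the CLI. This is a sketch: the file name is an example, and the target namespace is an assumption that should match the namespace in which your OpenShift AI dashboard manages accelerator profiles.

```
# Apply an AcceleratorProfile manifest saved as gaudi-profile.yaml (example file name);
# the namespace is an assumption - adjust it to your deployment
oc apply -n redhat-ods-applications -f gaudi-profile.yaml
```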
2.5.6. Samba (SMB or Windows) File Serving over GFS2 | 2.5.6. Samba (SMB or Windows) File Serving over GFS2 As of the Red Hat Enterprise Linux 6.2 release, you can use Samba (SMB or Windows) file serving from a GFS2 file system with CTDB, which allows active/active configurations. For information on Clustered Samba configuration, see the Cluster Administration document. Simultaneous access to the data in the Samba share from outside of Samba is not supported. There is currently no support for GFS2 cluster leases, which slows Samba file serving. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-samba-gfs2 |
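For orientation, a clustered Samba share backed by GFS2 is ultimately expressed in smb.conf with clustering enabled. The excerpt below is a minimal sketch only: the share name and path are examples, and the required CTDB setup is covered in the Cluster Administration document referenced above.

```
# /etc/samba/smb.conf excerpt (sketch; share name and path are examples)
[global]
    clustering = yes

[gfs2share]
    path = /mnt/gfs2/share
    writeable = yes
```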
Chapter 7. Known issues | Chapter 7. Known issues There are no known issues for this release. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_5_release_notes/known_issues |
2.2.6.4. Use TCP Wrappers To Control Access | 2.2.6.4. Use TCP Wrappers To Control Access Use TCP Wrappers to control access to either FTP daemon as outlined in Section 2.2.1.1, "Enhancing Security With TCP Wrappers" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_ftp-use_tcp_wrappers_to_control_access |
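As a minimal sketch of this approach, TCP wrappers rules live in /etc/hosts.allow and /etc/hosts.deny and are keyed by daemon name. The network below is an example, and the rules assume the FTP daemon (such as vsftpd in Red Hat Enterprise Linux 6) is built with TCP wrappers support.

```
# /etc/hosts.allow - permit FTP connections from an example local subnet only
vsftpd : 192.168.0.

# /etc/hosts.deny - reject all other FTP connections
vsftpd : ALL
```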
Chapter 2. Before you begin | Chapter 2. Before you begin 2.1. Run with clean target server installation Because the JBoss Server Migration Tool creates the configuration files based on the configuration of a release, it is intended to be run on a clean and unconfigured target server installation. The JBoss Server Migration Tool creates a backup of the target server's configuration files by appending .beforeMigration to the file names. It then creates totally new configuration files for the target server using the source server's configuration files, and migrates the configuration to run in the target server configuration. Warning When you run the JBoss Server Migration Tool, all changes on the target server made between installation and running the migration tool are lost. Also, be aware that if you run the tool against the target server directory more than once, the subsequent runs will overwrite the original target configuration files that were backed up on the first run of the tool. This is because each run of the tool backs up the configuration files by appending .beforeMigration , resulting in the loss of any existing backed up configuration files. 2.2. Customize the migration The JBoss Server Migration Tool provides the ability to configure logging, reporting, and the execution of migration tasks. By default, when you run the JBoss Server Migration Tool in non-interactive mode, it migrates the entire server configuration. You can configure the JBoss Server Migration Tool to customize logging and reporting output. You can also configure it to skip any part of the configuration that you do not want to migrate. Additional resources For instructions on how to configure properties to control the migration process, see Configuring the JBoss Server Migration Tool . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_the_jboss_server_migration_tool/assembly_byg-server-migration-tool_server-migration-tool |
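To review what a migration run changed, you can compare the backed-up files against the newly generated ones. The sketch below assumes a standard standalone configuration layout under JBOSS_HOME; adjust the paths for your installation and configuration profile.

```
# List the configuration files that the tool backed up before migrating
ls $JBOSS_HOME/standalone/configuration/*.beforeMigration

# Compare a backup with the newly generated configuration
diff $JBOSS_HOME/standalone/configuration/standalone.xml.beforeMigration \
     $JBOSS_HOME/standalone/configuration/standalone.xml
```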
8.54. glibc | 8.54. glibc 8.54.1. RHSA-2013:1605 - Moderate: glibc security, bug fix, and enhancement update Updated glibc packages that fix three security issues, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The glibc packages provide the standard C libraries ( libc ), POSIX thread libraries (libpthread), standard math libraries ( libm ), and the Name Server Caching Daemon ( nscd ) used by multiple programs on the system. Without these libraries, the Linux system cannot function correctly. Security Fixes CVE-2013-4332 Multiple integer overflow flaws, leading to heap-based buffer overflows, were found in glibc's memory allocator functions (pvalloc, valloc, and memalign). If an application used such a function, it could cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2013-0242 A flaw was found in the regular expression matching routines that process multibyte character input. If an application utilized the glibc regular expression matching mechanism, an attacker could provide specially-crafted input that, when processed, would cause the application to crash. CVE-2013-1914 It was found that getaddrinfo() did not limit the amount of stack memory used during name resolution. An attacker able to make an application resolve an attacker-controlled hostname or IP address could possibly cause the application to exhaust all stack memory and crash. Bug Fixes BZ#1022022 Due to a defect in the initial release of the getaddrinfo() system call in Red Hat enterprise Linux 6.0, AF_INET and AF_INET6 queries resolved from the /etc/hosts file returned queried names as canonical names. This incorrect behavior is, however, still considered to be the expected behavior. As a result of a recent change in getaddrinfo(), AF_INET6 queries started resolving the canonical names correctly. However, this behavior was unexpected by applications that relied on queries resolved from the /etc/hosts file, and these applications could thus fail to operate properly. This update applies a fix ensuring that AF_INET6 queries resolved from /etc/hosts always return the queried name as canonical. Note that DNS lookups are resolved properly and always return the correct canonical names. A proper fix to AF_INET6 queries resolution from /etc/hosts may be applied in future releases; for now, due to a lack of standard, Red Hat suggests the first entry in the /etc/hosts file, that applies for the IP address being resolved, to be considered the canonical entry. BZ# 552960 The pthread_cond_wait() and pthread_cond_timedwait() functions for AMD64, Intel 64, and Intel P6 architectures contained several synchronizations bugs. Consequently, when a multi-threaded program used a priority-inherited mutex to synchronize access to a condition variable, some threads could enter a deadlock situation when they were woken up by the pthread_cond_signal() function or canceled. This update fixes these synchronization bugs and a thread deadlock can no longer occur in the described scenario. BZ#834386 The C library security framework was unable to handle dynamically loaded character conversion routines when loaded at specific virtual addresses. 
This resulted in an unexpected termination with a segmentation fault when trying to use the dynamically loaded character conversion routine. This update enhances the C library security framework to handle dynamically loaded character conversion routines at any virtual memory address, and crashes no longer occur in the described scenario. BZ# 848748 Due to a defect in the standard C library, the library could allocate unbounded amounts of memory and eventually terminate unexpectedly when processing a corrupted NIS request. With this update, the standard C library has been fixed to limit the size of NIS records to the maximum of 16 MB, and the library no longer crashes in this situation. However, it is possible that some configurations with very large NIS maps may no longer work if those maps exceed the maximum of 16 MB. BZ#851470 Previously, the ttyname() and ttyname_r() library calls returned an error if the proc (/proc/) file system was not mounted. As a result, certain applications could not properly run in a chroot environment. With this update, if the ttyname() and ttyname_r() calls cannot read the /proc/self/fd/ directory, they attempt to obtain the name of the respective terminal from the devices known to the system (the /dev and /dev/pts directories) rather than immediately return an error. Applications running in a chroot environment now work as expected. BZ#862094 A defect in the standard C library resulted in an attempt to free memory that was not allocated with the malloc() function. Consequently, the dynamic loader could terminate unexpectedly when loading shared libraries that require the dynamic loader to search non-default directories. The dynamic loader has been modified to avoid calling the free() routine for memory that was not allocated using malloc() and no longer crashes in this situation. BZ#863384 Due to a defect in the getaddrinfo() resolver system call, getaddrinfo() could, under certain conditions, return results that were not Fully Qualified Domain Names (FQDN) when FQDN results were requested. Applications using getaddrinfo() that expected FQDN results could fail to operate correctly. The resolver has been fixed to return FQDN results as expected when requesting an FQDN result and the AI_CANONNAME flag is set. BZ#868808 The backtrace() function did not print call frames correctly on the AMD64 and Intel 64 architecture if the call stack contained a recursive function call. This update fixes this behavior so backtrace() now prints call frames as expected. BZ# 903754 Debug information previously contained the name "fedora" which could lead to confusion and the respective package could be mistaken for a Fedora-specific package. To avoid this confusion, the package build framework has been changed to ensure that the debug information no longer contains the name "fedora." BZ#919562 A program that opened and used dynamic libraries which used thread-local storage variables may have terminated unexpectedly with a segmentation fault when it was being audited by a module that also used thread-local storage. This update modifies the dynamic linker to detect such a condition, and crashes no longer occur in the described scenario. BZ#928318 When the /etc/resolv.conf file was missing on the system or did not contain any nameserver entries, getaddrinfo() failed instead of sending a DNS query to the local DNS server. This bug has been fixed and getaddrinfo() now queries the local DNS server in this situation. 
BZ# 929388 A fix to prevent logic errors in various mathematical functions, including exp(), exp2(), expf(), exp2f(), pow(), sin(), tan(), and rint(), created CPU performance regressions for certain inputs. The performance regressions have been analyzed and the core routines have been optimized to raise CPU performance to expected levels. BZ# 952422 Previously, multi-threaded applications using the QReadWriteLocks locking mechanism could experience performance issues under heavy load. This happened due to the ineffectively designed sysconf() function that was repeatedly called from the Qt library. This update improves the glibc implementation of sysconf() by caching the value of the _SC_NPROCESSORS_ONLN variable so the system no longer spends extensive amounts of time parsing the /proc/stat file. Performance of the aforementioned applications, as well as applications repetitively requesting the value of _SC_NPROCESSORS_ONLN, should significantly improve. BZ# 966775 Improvements to the accuracy of the floating point functions in the math library, which were introduced by the RHBA-2013:0279 advisory, led to a performance decrease for those functions. With this update, the performance regressions have been analyzed and a fix has been applied that retains the current accuracy but reduces the performance penalty to acceptable levels. BZ#966778 If user groups were maintained on an NIS server and queried over the NIS compat interface, queries for user groups containing a large number of users could return an incomplete list of users. This update fixes multiple bugs in the compat interface so that group queries in the described scenario now return correct results. BZ#970090 Due to a defect in the name service cache daemon (nscd), cached DNS queries returned, under certain conditions, only IPv4 addresses even though the AF_UNSPEC address family was specified and both IPv4 and IPv6 results existed. The defect has been corrected and nscd now correctly returns both IPv4 and IPv6 results in this situation. BZ#988931 Due to a defect in the dynamic loader, the loader attempted to write to a read-only page in memory while loading a prelinked dynamic application. This resulted in all prelinked applications being terminated unexpectedly during startup. The defect in the dynamic loader has been corrected and prelinked applications no longer crash in this situation. Enhancements BZ#629823 Previous versions of nscd did not cache netgroup queries. The lack of netgroup caching could result in less than optimal performance for users that relied heavily on netgroup maps in their system configurations. With this update, support for netgroup query caching has been added to nscd. Systems that rely heavily on netgroup maps and use nscd for caching will now have their netgroup queries cached, which should improve performance in most configurations. BZ#663641 Previously, if users wanted to adjust the size of stacks created for new threads, they had to modify the program code. With this update, glibc adds a new GLIBC_PTHREAD_STACKSIZE environment variable allowing users to set the desired default thread stack size in bytes. The variable affects the threads created with the pthread_create() function and default attributes. The default thread stack size may be slightly larger than the requested size due to memory alignment and certain other factors.
BZ#886968 The dynamic loader now coordinates with GDB to provide an interface that is used to improve the performance of debugging applications with very large lists of loaded libraries. BZ#905575 The glibc packages now provide four Statically Defined Tracing (SDT) probes in the libm libraries for the pow() and exp() functions. The SDT probes can be used to detect whether the input to the functions causes the routines to execute the multi-precision slow paths. This information can be used to detect performance problems in applications calling the pow() and exp() functions. BZ#916986 Support for the MAP_HUGETLB and MAP_STACK flags has been added for use with the mmap() function. Support for these flags is dependent on the kernel, and applications calling mmap() should always examine the return value to determine the result of the call. BZ#929302 Performance of the sched_getcpu() function has been improved by calling the Virtual Dynamic Shared Object (VDSO) implementation of the getcpu() system call on the PowerPC architecture. BZ#970776 The error string for the ESTALE error code has been updated to print "Stale file handle" instead of "Stale NFS file handle", which should prevent confusion over the meaning of the error. The error string has been translated to all supported languages. All glibc users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/glibc |
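To tie a few of these items to commands you can run on an updated system, the sketch below checks the installed glibc build, queries the processor count that the now-cached _SC_NPROCESSORS_ONLN path reports, and launches a program with the new thread stack size variable. The application name is a placeholder.

```
# Confirm the installed glibc package version
rpm -q glibc

# getconf reports _SC_NPROCESSORS_ONLN, the value that glibc now caches
getconf _NPROCESSORS_ONLN

# Request a 4 MiB default stack for threads created with default attributes
# (my_threaded_app is a placeholder for your own binary)
GLIBC_PTHREAD_STACKSIZE=4194304 ./my_threaded_app
```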
5.3. Virtual Disks | 5.3. Virtual Disks 5.3.1. Adding a New Virtual Disk You can add multiple virtual disks to a virtual machine. Image is the default type of disk. You can also add a Direct LUN disk or a Cinder (OpenStack Volume) disk. Image disk creation is managed entirely by the Manager. Direct LUN disks require externally prepared targets that already exist. Cinder disks require access to an instance of OpenStack Volume that has been added to the Red Hat Virtualization environment using the External Providers window; see Adding an OpenStack Volume (Cinder) Instance for Storage Management for more information. Existing disks are either floating disks or shareable disks attached to virtual machines. Adding Disks to Virtual Machines Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Disks tab. Click New . Use the appropriate radio buttons to switch between Image , Direct LUN , or Cinder . Enter a Size(GB) , Alias , and Description for the new disk. Use the drop-down lists and check boxes to configure the disk. See Section A.4, "Explanation of Settings in the New Virtual Disk and Edit Virtual Disk Windows" for more details on the fields for all disk types. Click OK . The new disk appears in the details view after a short time. 5.3.2. Attaching an Existing Disk to a Virtual Machine Floating disks are disks that are not associated with any virtual machine. Floating disks can minimize the amount of time required to set up virtual machines. Designating a floating disk as storage for a virtual machine makes it unnecessary to wait for disk preallocation at the time of a virtual machine's creation. Floating disks can be attached to a single virtual machine, or to multiple virtual machines if the disk is shareable. Each virtual machine that uses the shared disk can use a different disk interface type. Once a floating disk is attached to a virtual machine, the virtual machine can access it. Attaching Virtual Disks to Virtual Machines Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Disks tab. Click Attach . Select one or more virtual disks from the list of available disks and select the required interface from the Interface drop-down. Click OK . Note No Quota resources are consumed by attaching virtual disks to, or detaching virtual disks from, virtual machines. 5.3.3. Extending the Available Size of a Virtual Disk You can extend the available size of a virtual disk while the virtual disk is attached to a virtual machine. Resizing a virtual disk does not resize the underlying partitions or file systems on that virtual disk. Use the fdisk utility to resize the partitions and file systems as required. See How to Resize a Partition using fdisk for more information. Extending the Available Size of Virtual Disks Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Disks tab and select the disk to edit. Click Edit . Enter a value in the Extend size by(GB) field. Click OK . The target disk's status becomes locked for a short time, during which the drive is resized. When the resizing of the drive is complete, the status of the drive becomes OK . 5.3.4. Hot Plugging a Virtual Disk You can hot plug virtual disks. Hot plugging means enabling or disabling devices while a virtual machine is running. Note The guest operating system must support hot plugging virtual disks. Hot Plugging Virtual Disks Click Compute Virtual Machines . 
Click a virtual machine name to go to the details view. Click the Disks tab and select the virtual disk to hot plug. Click More Actions ( ), then click Activate to enable the disk, or Deactivate to disable the disk. Click OK . 5.3.5. Removing a Virtual Disk from a Virtual Machine Removing Virtual Disks From Virtual Machines Click Compute Virtual Machines . Click a virtual machine name to go to the details view. Click the Disks tab and select the virtual disk to remove. Click More Actions ( ), then click Deactivate . Click OK . Click Remove . Optionally, select the Remove Permanently check box to completely remove the virtual disk from the environment. If you do not select this option - for example, because the disk is a shared disk - the virtual disk will remain in Storage Disks . Click OK . If the disk was created as block storage, for example iSCSI, and the Wipe After Delete check box was selected when creating the disk, you can view the log file on the host to confirm that the data has been wiped after permanently removing the disk. See Settings to Wipe Virtual Disks After Deletion in the Administration Guide . If the disk was created as block storage, for example iSCSI, and the Discard After Delete check box was selected on the storage domain before the disk was removed, a blkdiscard command is called on the logical volume when it is removed and the underlying storage is notified that the blocks are free. See Setting Discard After Delete for a Storage Domain in the Administration Guide . A blkdiscard is also called on the logical volume when a virtual disk is removed if the virtual disk is attached to at least one virtual machine with the Enable Discard check box selected. 5.3.6. Importing a Disk Image from an Imported Storage Domain You can import floating virtual disks from an imported storage domain. This procedure requires access to the Administration Portal. Note Only QEMU-compatible disks can be imported into the Manager. Importing a Disk Image Click Storage Domains . Click an imported storage domain to go to the details view. Click Disk Import . Select one or more disk images and click Import to open the Import Disk(s) window. Select the appropriate Disk Profile for each disk. Click OK to import the selected disks. 5.3.7. Importing an Unregistered Disk Image from an Imported Storage Domain You can import floating virtual disks from a storage domain. Floating disks created outside of a Red Hat Virtualization environment are not registered with the Manager. Scan the storage domain to identify unregistered floating disks to be imported. This procedure requires access to the Administration Portal. Note Only QEMU-compatible disks can be imported into the Manager. Importing a Disk Image Click Storage Domains . Click More Actions ( ), then click Scan Disks so that the Manager can identify unregistered disks. Select an unregistered disk name and click Disk Import . Select one or more disk images and click Import to open the Import Disk(s) window. Select the appropriate Disk Profile for each disk. Click OK to import the selected disks. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-virtual_disks |
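After you extend a virtual disk in the Manager, the guest must still grow its partition and file system, as noted in the section on extending disks. The following is a sketch for a Linux guest; the device and partition names are examples, and the exact commands depend on the guest's disk layout and file system type.

```
# Inside the guest: confirm that the larger disk size is visible
lsblk

# Grow the partition (example: partition 1 on /dev/vda; growpart is provided by cloud-utils-growpart)
growpart /dev/vda 1

# Grow the file system with the tool that matches it
resize2fs /dev/vda1      # ext4
xfs_growfs /             # XFS (takes the mount point)
```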
Chapter 16. Suspending applications | Chapter 16. Suspending applications This guide explains how to disable all keys and access tokens for an application. If an application is misusing your API and affecting other traffic, you may need to quickly suspend its operations before contacting the developer involved to ask them to amend their code or configuration. 16.1. Find the application You can find the application from the Accounts or Applications tabs or by searching as described here . 16.2. Disable the application Once you have located the application and see the application summary page, click the suspend icon next to the State value. This action immediately disables the application's access to the API and stops all of its keys from working. Calls with these application keys will be rejected by the control system. The application can be unsuspended using the same button once the problematic behavior has been rectified. Note If you use caching in your agents, suspension may not be immediate and may take a short time to propagate. 16.3. Contact the developer How you contact the developer of the application will depend on your workflow and policy. On the same page, you can click on the account name, which will take you to the account view where you can identify the key administrator of the account that owns the application. You can contact them either by email or by clicking the send message button as shown, which will generate a dashboard message for the user. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/suspend-application |
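If you want to automate suspension rather than use the Admin Portal, the 3scale Account Management API exposes an application suspend endpoint. Treat the call below as a hedged sketch: verify the exact path in the ActiveDocs of your 3scale version, and replace the placeholders with values from your own account.

```
# Suspend an application through the Account Management API (sketch; confirm the endpoint
# in your version's ActiveDocs before relying on it)
curl -X PUT "https://<ADMIN_PORTAL_DOMAIN>/admin/api/accounts/<ACCOUNT_ID>/applications/<APPLICATION_ID>/suspend.xml" \
     -d "access_token=<ACCESS_TOKEN>"
```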
Chapter 29. TopoLVM APIs | Chapter 29. TopoLVM APIs 29.1. TopoLVM APIs 29.1.1. LogicalVolume [topolvm.io/v1] Description LogicalVolume is the Schema for the logicalvolumes API Type object 29.2. LogicalVolume [topolvm.io/v1] Description LogicalVolume is the Schema for the logicalvolumes API Type object 29.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LogicalVolumeSpec defines the desired state of LogicalVolume status object LogicalVolumeStatus defines the observed state of LogicalVolume 29.2.1.1. .spec Description LogicalVolumeSpec defines the desired state of LogicalVolume Type object Required name nodeName size Property Type Description accessType string 'accessType' specifies how the user intends to consume the snapshot logical volume. Set to "ro" when creating a snapshot and to "rw" when restoring a snapshot or creating a clone. This field is populated only when LogicalVolume has a source. deviceClass string name string nodeName string size integer-or-string source string 'source' specifies the logicalvolume name of the source; if present. This field is populated only when LogicalVolume has a source. 29.2.1.2. .status Description LogicalVolumeStatus defines the observed state of LogicalVolume Type object Property Type Description code integer A Code is an unsigned 32-bit error code as defined in the gRPC spec. currentSize integer-or-string message string volumeID string INSERT ADDITIONAL STATUS FIELD - define observed state of cluster Important: Run "make" to regenerate code after modifying this file 29.2.2. API endpoints The following API endpoints are available: /apis/topolvm.io/v1/logicalvolumes DELETE : delete collection of LogicalVolume GET : list objects of kind LogicalVolume POST : create a LogicalVolume /apis/topolvm.io/v1/logicalvolumes/{name} DELETE : delete a LogicalVolume GET : read the specified LogicalVolume PATCH : partially update the specified LogicalVolume PUT : replace the specified LogicalVolume /apis/topolvm.io/v1/logicalvolumes/{name}/status GET : read status of the specified LogicalVolume PATCH : partially update status of the specified LogicalVolume PUT : replace status of the specified LogicalVolume 29.2.2.1. /apis/topolvm.io/v1/logicalvolumes Table 29.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of LogicalVolume Table 29.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 29.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind LogicalVolume Table 29.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 29.5. HTTP responses HTTP code Reponse body 200 - OK LogicalVolumeList schema 401 - Unauthorized Empty HTTP method POST Description create a LogicalVolume Table 29.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 29.7. Body parameters Parameter Type Description body LogicalVolume schema Table 29.8. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 201 - Created LogicalVolume schema 202 - Accepted LogicalVolume schema 401 - Unauthorized Empty 29.2.2.2. /apis/topolvm.io/v1/logicalvolumes/{name} Table 29.9. Global path parameters Parameter Type Description name string name of the LogicalVolume Table 29.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a LogicalVolume Table 29.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. 
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 29.12. Body parameters Parameter Type Description body DeleteOptions schema Table 29.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified LogicalVolume Table 29.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 29.15. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified LogicalVolume Table 29.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 29.17. Body parameters Parameter Type Description body Patch schema Table 29.18. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified LogicalVolume Table 29.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 29.20. Body parameters Parameter Type Description body LogicalVolume schema Table 29.21. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 201 - Created LogicalVolume schema 401 - Unauthorized Empty 29.2.2.3. /apis/topolvm.io/v1/logicalvolumes/{name}/status Table 29.22. Global path parameters Parameter Type Description name string name of the LogicalVolume Table 29.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified LogicalVolume Table 29.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 29.25. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified LogicalVolume Table 29.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 29.27. Body parameters Parameter Type Description body Patch schema Table 29.28. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified LogicalVolume Table 29.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 29.30. Body parameters Parameter Type Description body LogicalVolume schema Table 29.31. HTTP responses HTTP code Reponse body 200 - OK LogicalVolume schema 201 - Created LogicalVolume schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/topolvm-apis-1 |
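LogicalVolume resources are normally created and managed by the TopoLVM CSI driver rather than by hand, but you can inspect them through the endpoints documented above using the CLI. The resource is cluster-scoped (the paths contain no namespace segment); the resource name below is a placeholder.

```
# List LogicalVolume custom resources
oc get logicalvolumes.topolvm.io

# Inspect the spec and status of one LogicalVolume (placeholder name)
oc get logicalvolumes.topolvm.io <logicalvolume-name> -o yaml
```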
Appendix A. Tools and tips for troubleshooting and bug reporting | Appendix A. Tools and tips for troubleshooting and bug reporting The troubleshooting information in the following sections might be helpful when diagnosing issues at the start of the installation process. The following sections are for all supported architectures. However, if an issue is for a particular architecture, it is specified at the start of the section. A.1. Dracut Dracut is a tool that manages the initramfs image during the Linux operating system boot process. The dracut emergency shell is an interactive mode that can be initiated while the initramfs image is loaded. You can run basic troubleshooting commands from the dracut emergency shell. For more information, see the Troubleshooting section of the dracut man page on your system. A.2. Using installation log files For debugging purposes, the installation program logs installation actions in files that are located in the /tmp directory. These log files are listed in the following table. Table A.1. Log files generated during the installation Log file Contents /tmp/anaconda.log General messages. /tmp/program.log All external programs run during the installation. /tmp/storage.log Extensive storage module information. /tmp/packaging.log dnf and rpm package installation messages. /tmp/dbus.log Information about the dbus session that is used for installation program modules. /tmp/sensitive-info.log Configuration information that is not part of other logs and not copied to the installed system. /tmp/syslog Hardware-related system messages. This file contains messages from other Anaconda files. If the installation fails, the messages are consolidated into /tmp/anaconda-tb-identifier , where identifier is a random string. After a successful installation, these files are copied to the installed system under the directory /var/log/anaconda/ . However, if the installation is unsuccessful, or if the inst.nosave=all or inst.nosave=logs options are used when booting the installation system, these logs only exist in the installation program's RAM disk. This means that the logs are not saved permanently and are lost when the system is powered down. To store them permanently, copy the files to another system on the network or copy them to a mounted storage device such as a USB flash drive. A.2.1. Creating pre-installation log files Use this procedure to set the inst.debug option to create log files before the installation process starts. These log files contain, for example, the current storage configuration. Prerequisites The Red Hat Enterprise Linux boot menu is open. Procedure Select the Install Red Hat Enterprise Linux option from the boot menu. Press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected boot options. Append inst.debug to the options. For example: Press the Enter key on your keyboard. The system stores the pre-installation log files in the /tmp/pre-anaconda-logs/ directory before the installation program starts. To access the log files, switch to the console. Change to the /tmp/pre-anaconda-logs/ directory: Additional resources Boot options reference Console logging during installation A.2.2. Transferring installation log files to a USB drive Use this procedure to transfer installation log files to a USB drive. Prerequisites You have backed up data from the USB drive. You are logged into a root account and you have access to the installation program's temporary file system. 
Procedure Press Ctrl + Alt + F2 to access a shell prompt on the system you are installing. Connect a USB flash drive to the system and run the dmesg command: A log detailing all recent events is displayed. At the end of this log, a set of messages is displayed. For example: Note the name of the connected device. In the above example, it is sdb . Navigate to the /mnt directory and create a new directory that serves as the mount target for the USB drive. This example uses the name usb : Mount the USB flash drive onto the newly created directory. In most cases, you do not want to mount the whole drive, but a partition on it. Do not use the name sdb , use the name of the partition you want to write the log files to. In this example, the name sdb1 is used: Verify that you mounted the correct device and partition by accessing it and listing its contents: Copy the log files to the mounted device. Unmount the USB flash drive. If you receive an error message that the target is busy, change your working directory to outside the mount (for example, /). A.2.3. Transferring installation log files over the network Use this procedure to transfer installation log files over the network. Prerequisites You are logged into a root account and you have access to the installation program's temporary file system. Procedure Press Ctrl + Alt + F2 to access a shell prompt on the system you are installing. Switch to the /tmp directory where the log files are located: Copy the log files onto another system on the network using the scp command: Replace user with a valid user name on the target system, address with the target system's address or host name, and path with the path to the directory where you want to save the log files. For example, if you want to log in as john on a system with an IP address of 192.168.0.122 and place the log files into the /home/john/logs/ directory on that system, the command is as follows: When connecting to the target system for the first time, the SSH client asks you to confirm that the fingerprint of the remote system is correct and that you want to continue: Type yes and press Enter to continue. Provide a valid password when prompted. The files are transferred to the specified directory on the target system. A.3. Detecting memory faults using the Memtest86 application Faults in memory (RAM) modules can cause your system to fail unpredictably. In certain situations, memory faults might only cause errors with particular combinations of software. For this reason, you should test your system's memory before you install Red Hat Enterprise Linux. Red Hat Enterprise Linux includes the Memtest86+ memory testing application for BIOS systems only. Support for UEFI systems is currently unavailable. A.3.1. Running Memtest86 Use this procedure to run the Memtest86 application to test your system's memory for faults before you install Red Hat Enterprise Linux. Prerequisites You have accessed the Red Hat Enterprise Linux boot menu. Procedure From the Red Hat Enterprise Linux boot menu, select Troubleshooting > Run a memory test . The Memtest86 application window is displayed and testing begins immediately. By default, Memtest86 performs ten tests in every pass. After the first pass is complete, a message is displayed in the lower part of the window informing you of the current status. Another pass starts automatically. If Memtest86+ detects an error, the error is displayed in the central pane of the window and is highlighted in red. 
The message includes detailed information such as which test detected a problem, the memory location that is failing, and others. In most cases, a single successful pass of all 10 tests is sufficient to verify that your RAM is in good condition. In rare circumstances, however, errors that went undetected during the first pass might appear on subsequent passes. To perform a thorough test on important systems, run the tests overnight or for a few days to complete multiple passes. The amount of time it takes to complete a single full pass of Memtest86+ varies depending on your system's configuration, notably the RAM size and speed. For example, on a system with 2 GiB of DDR2 memory at 667 MHz, a single pass takes 20 minutes to complete. Optional: Follow the on-screen instructions to access the Configuration window and specify a different configuration. To halt the tests and reboot your computer, press the Esc key at any time. Additional resources How to use Memtest86 A.4. Verifying boot media Verifying ISO images helps to avoid problems that are sometimes encountered during installation. Such sources include DVDs and ISO images stored on a disk or NFS server. Use this procedure to test the integrity of an ISO-based installation source before using it to install Red Hat Enterprise Linux. Prerequisites You have accessed the Red Hat Enterprise Linux boot menu. Procedure From the boot menu, select Test this media & install Red Hat Enterprise Linux 9 to test the boot media. The boot process tests the media and highlights any issues. Optional: You can start the verification process by appending rd.live.check to the boot command line. A.5. Consoles and logging during installation The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows in addition to the main interface. Each of these windows serves a different purpose; they display several different logs, which can be used to troubleshoot issues during the installation process. One of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. The terminal multiplexer is running in virtual console 1. To switch from the actual installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface which runs in virtual console 6, press Ctrl + Alt + F6 . During a text mode installation, you start in virtual console 1 ( tmux ), and switching to console 6 will open a shell prompt instead of a graphical interface. The console running tmux has five available windows; their contents are described in the following table, along with keyboard shortcuts. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n or Alt + Tab to switch to the next tmux window, and Ctrl + b p to switch to the previous one. Table A.2. Available tmux windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC direct mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices and configuration, stored in /tmp/storage.log .
Ctrl + b 5 Program log; displays messages from utilities executed during the installation process, stored in /tmp/program.log . A.6. Saving screenshots You can press Shift + Print Screen at any time during the graphical installation to capture the current screen. The screenshots are saved to /tmp/anaconda-screenshots . A.7. Display settings and device drivers Some video cards have trouble booting into the Red Hat Enterprise Linux graphical installation program. If the installation program does not run using its default settings, it attempts to run in a lower resolution mode. If that fails, the installation program attempts to run in text mode. There are several possible solutions to resolve display issues, most of which involve specifying custom boot options; an illustrative boot line is sketched at the end of this appendix. For more information, see Console boot options . Table A.3. Solutions Solution Description Use the text mode You can attempt to perform the installation using the text mode. For details, refer to Installing RHEL in text mode . Specify the display resolution manually If the installation program fails to detect your screen resolution, you can override the automatic detection and specify it manually. To do this, append the inst.resolution=x option at the boot menu, where x is your display's resolution, for example, 1024x768. Use an alternate video driver You can attempt to specify a custom video driver, overriding the installation program's automatic detection. To specify a driver, use the inst.xdriver=x option, where x is the device driver you want to use (for example, nouveau ). Perform the installation using VNC If the above options fail, you can use a separate system to access the graphical installation over the network, using the Virtual Network Computing (VNC) protocol. For details on installing using VNC, see Preparing a remote installation by using VNC . If specifying a custom video driver solves your problem, you should report it as a bug in Jira . The installation program should be able to detect your hardware automatically and use the appropriate driver without intervention. | [
"vmlinuz ... inst.debug",
"cd /tmp/pre-anaconda-logs/",
"dmesg",
"[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk",
"mkdir usb",
"mount /dev/sdb1 /mnt/usb",
"cd /mnt/usb",
"ls",
"cp /tmp/*log /mnt/usb",
"umount /mnt/usb",
"cd /tmp",
"scp *log user@address:path",
"scp *log [email protected]:/home/john/logs/",
"The authenticity of host '192.168.0.122 (192.168.0.122)' can't be established. ECDSA key fingerprint is a4:60:76:eb:b2:d0:aa:23:af:3d:59:5c:de:bb:c4:42. Are you sure you want to continue connecting (yes/no)?"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/troubleshooting-at-the-start-of-the-installation_rhel-installer |
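The display-related boot options described in section A.7 do not appear in the command listing above. As a sketch only, written in the same abbreviated vmlinuz form used in the listing, and assuming a 1024x768 panel and the nouveau driver purely for illustration, the edited boot line might look like this:

# Append a fixed resolution and an explicit video driver to the boot options (values are examples)
vmlinuz ... inst.resolution=1024x768 inst.xdriver=nouveau

Specify only the options you need; if a custom driver resolves the problem, report it as a bug as described in section A.7.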
B.2. How to Set Up Red Hat Virtualization Manager to Use Ethtool | B.2. How to Set Up Red Hat Virtualization Manager to Use Ethtool You can configure ethtool properties for host network interface cards from the Administration Portal. The ethtool_opts key is not available by default and needs to be added to the Manager using the engine configuration tool. You also need to install the required VDSM hook package on the hosts. Adding the ethtool_opts Key to the Manager On the Manager, run the following command to add the key: # engine-config -s UserDefinedNetworkCustomProperties=ethtool_opts=.* --cver=4.4 Restart the ovirt-engine service: # systemctl restart ovirt-engine.service On each host where you want to configure ethtool properties, install the VDSM hook package. The package is available by default on Red Hat Virtualization Host but needs to be installed on Red Hat Enterprise Linux hosts. # dnf install vdsm-hook-ethtool-options The ethtool_opts key is now available in the Administration Portal; an example property value is sketched at the end of this section. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts to apply ethtool properties to logical networks. | [
"engine-config -s UserDefinedNetworkCustomProperties=ethtool_opts=.* --cver=4.4",
"systemctl restart ovirt-engine.service",
"dnf install vdsm-hook-ethtool-options"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/How_to_Set_Up_Red_Hat_Enterprise_Virtualization_Manager_to_Use_Ethtool |
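With the key registered, an ethtool_opts value is entered as the custom property when you edit a host network interface in the Administration Portal. The value below is a hedged example only: the interface names em1 and em2 and the option values are assumptions, and the underlying NIC driver must support each requested ethtool option.

--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half

To confirm that the key was added on the Manager, you can query it with the engine configuration tool; the --cver=4.4 argument limits the output to that configuration version:

# engine-config -g UserDefinedNetworkCustomProperties --cver=4.4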